[Introduction] Self-Role Prompting: A New Paradigm for Zero-Shot Reasoning in Large Models
This article proposes an innovative zero-shot reasoning strategy, Self-Role Prompting, which lets a large language model autonomously select the most suitable reasoning role before solving a problem. The strategy uses a two-stage framework (role identification followed by role-driven reasoning) and requires no manually designed role templates. It achieves significant gains on mathematical reasoning (AQUA-RAT), commonsense question answering (CommonsenseQA), and strategic reasoning (StrategyQA), pointing to a new direction for prompt engineering.
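The two-stage flow can be sketched as two chained LLM calls: the first asks the model to name a suitable expert role, and the second re-poses the question with that role prepended. The sketch below is a minimal illustration, not the article's implementation; `call_llm` is a hypothetical stand-in for any chat-completion API and is stubbed here with canned replies so the example runs offline.

```python
def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM API call; returns canned replies for the demo.
    if "which expert role" in prompt.lower():
        return "a mathematician"
    return "Reasoning as a mathematician: 6 * 7 = 42. Answer: 42"


def self_role_prompt(question: str) -> str:
    # Stage 1: role identification — ask the model to pick a suitable role.
    role_prompt = (
        f"Question: {question}\n"
        "Which expert role is best suited to answer this question? "
        "Reply with the role only."
    )
    role = call_llm(role_prompt).strip()

    # Stage 2: role-driven reasoning — re-ask the question in that role.
    reasoning_prompt = (
        f"You are {role}.\n"
        f"Question: {question}\n"
        "Reason step by step in this role, then state the final answer."
    )
    return call_llm(reasoning_prompt)


print(self_role_prompt("What is 6 * 7?"))
```

Because the role is produced by the model itself rather than taken from a hand-written template, the same two prompts generalize across math, commonsense, and strategy questions without per-task engineering.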