Section 01
Introduction: Study on the Impact of Prompt Politeness Level on Outputs of Domestic Large Language Models
This article reports a systematic experiment on domestic large language models exploring how the politeness level of a prompt affects model outputs. Across nine rounds of iterative experiments, the research team compared the performance of models such as DeepSeek, Doubao, and Qwen under prompts at different politeness levels, and found that politeness level may significantly affect a model's accuracy, refusal rate, and output stability. The study aims to fill the research gap concerning domestic models in the Chinese-language context and to provide empirical evidence for prompt-engineering practice.
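As a minimal sketch of how per-politeness-level metrics such as accuracy and refusal rate might be tallied, the snippet below groups hypothetical evaluation records by politeness label and averages the outcome flags. The record fields (`politeness`, `correct`, `refused`) and the labels are illustrative assumptions, not the study's actual data schema.

```python
from collections import defaultdict
from statistics import mean

def summarize(records):
    """Group evaluation records by politeness level and compute
    accuracy and refusal rate for each group (illustrative only)."""
    groups = defaultdict(list)
    for r in records:
        groups[r["politeness"]].append(r)
    summary = {}
    for level, rows in groups.items():
        summary[level] = {
            # fraction of responses judged correct
            "accuracy": mean(1.0 if r["correct"] else 0.0 for r in rows),
            # fraction of responses where the model declined to answer
            "refusal_rate": mean(1.0 if r["refused"] else 0.0 for r in rows),
        }
    return summary

# Toy records standing in for one round of the experiment.
records = [
    {"politeness": "polite", "correct": True,  "refused": False},
    {"politeness": "polite", "correct": False, "refused": False},
    {"politeness": "rude",   "correct": False, "refused": True},
    {"politeness": "rude",   "correct": True,  "refused": False},
]
print(summarize(records))
```

In practice each record would come from scoring one model response; comparing the resulting per-level summaries across rounds is one straightforward way to operationalize the comparison the study describes.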