Section 01
Introduction: S3Q-Reasoning—Reducing LLM Hallucinations with Structured Scratchpads
Hallucination remains a major obstacle to the widespread deployment of large language models (LLMs). The S3Q-Reasoning project proposes explicit structured scratchpads that require a model to state its assumptions while it reasons, reducing hallucinations and improving both answer accuracy and interpretability. The method is lightweight, easy to implement, and applicable across a range of scenarios.
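The structured-scratchpad idea can be sketched as a prompt template that forces the model to list assumptions before reasoning. The section names (Assumptions / Reasoning / Answer) and the function below are illustrative assumptions for this sketch, not the project's actual schema:

```python
def build_scratchpad_prompt(question: str) -> str:
    """Wrap a question in a structured scratchpad template.

    Hypothetical helper: the section layout is one plausible
    instantiation of an explicit structured scratchpad.
    """
    return (
        "Answer the question using the scratchpad below. "
        "List every assumption explicitly before reasoning.\n\n"
        f"Question: {question}\n\n"
        "### Assumptions\n"
        "- (state each assumption on its own line)\n\n"
        "### Reasoning\n"
        "- (step-by-step derivation citing the assumptions above)\n\n"
        "### Answer\n"
    )

prompt = build_scratchpad_prompt("What year did the first Moon landing occur?")
print(prompt)
```

Because the assumptions are surfaced as a distinct section, a downstream checker (or a human reader) can audit them independently of the final answer.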