Section 01
[Introduction] Reasoning Consistency Scanner: A Tool to Address the 'Reasoning-Answer Mismatch' Problem in Large Models
This article introduces Reasoning Consistency Scanner, an open-source tool that detects inconsistencies between a Large Language Model's (LLM) Chain-of-Thought (CoT) reasoning and its final answer. By surfacing cases where a model 'says one thing but does another', the tool improves the reliability and interpretability of AI systems, and it is useful in scenarios such as model evaluation, data cleaning, and prompt optimization.
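To make the idea concrete, here is a minimal Python sketch of the kind of check such a scanner performs: extract the conclusion stated at the end of a reasoning trace and compare it against the answer the model actually returned. The function names and the regex-based heuristic are illustrative assumptions, not the tool's actual API; a production scanner would use more robust parsing or an LLM-based judge.

```python
import re
from typing import Optional


def extract_cot_conclusion(cot_text: str) -> Optional[str]:
    """Heuristically pull the conclusion stated in a CoT trace.

    Looks for common conclusion markers such as 'therefore' or
    'so the answer is'. Hypothetical helper for illustration only.
    """
    match = re.search(
        r"(?:therefore|thus|so the answer is)[,:\s]+(.+)",
        cot_text,
        flags=re.IGNORECASE,
    )
    return match.group(1).strip().rstrip(".") if match else None


def is_consistent(cot_text: str, final_answer: str) -> bool:
    """Flag a sample as inconsistent when the conclusion reached in the
    reasoning chain does not contain the answer the model returned."""
    conclusion = extract_cot_conclusion(cot_text)
    if conclusion is None:
        return True  # no explicit conclusion found; nothing to compare
    return final_answer.strip().lower() in conclusion.lower()


# Example: the reasoning concludes '42', so an answer of '24' is flagged.
cot = "15 + 27 = 42, therefore the answer is 42."
print(is_consistent(cot, "42"))  # True  -> reasoning and answer agree
print(is_consistent(cot, "24"))  # False -> 'says one thing, does another'
```

Even this toy version captures the core design choice behind such a scanner: treat the reasoning trace and the final answer as two independent claims, and report a sample only when the two provably disagree, which keeps false positives low when no explicit conclusion can be extracted.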