Section 01
Prefix Consistency: A New CoT Reliability Evaluation Method Without Log-Probs
This article proposes the Prefix Consistency method, which addresses the efficiency dilemma of the self-consistency strategy in Chain-of-Thought (CoT) reasoning. The method truncates a chain of thought at a prefix and resamples continuations to test whether the answer remains stable; the difference in regeneration behavior between correct and incorrect answers serves as the reliability signal. Because it requires neither token probabilities nor model self-scoring, it works with black-box APIs, achieves up to a 21-fold improvement in token efficiency, and applies to a wide range of LLM reasoning tasks.
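The truncate-and-resample idea can be sketched in a few lines. Note this is a minimal illustration under assumed settings, not the paper's exact procedure: `generate_continuation` is a hypothetical stand-in for an LLM call, and the truncation ratio and sample count are arbitrary illustrative choices.

```python
import random

def prefix_consistency_score(chain_tokens, original_answer, generate_continuation,
                             truncate_ratio=0.5, n_samples=8):
    """Truncate the chain of thought at a prefix, resample continuations,
    and return the fraction of resampled answers matching the original.

    A high score means the answer regenerates stably from the prefix,
    which this method treats as a reliability signal. No token
    probabilities or self-scoring are needed, so it works with
    black-box model APIs.
    """
    prefix = chain_tokens[: int(len(chain_tokens) * truncate_ratio)]
    matches = sum(generate_continuation(prefix) == original_answer
                  for _ in range(n_samples))
    return matches / n_samples

# Toy demo: a mock "model" that reproduces the original answer 80% of the time.
def mock_model(prefix, rng=random.Random(0)):
    return "42" if rng.random() < 0.8 else "17"

score = prefix_consistency_score(list("step1 step2 step3"), "42", mock_model)
```

In practice, the score from multiple truncation points can be aggregated, and low-consistency answers flagged or re-solved, which is where the token savings over full self-consistency (which always samples many complete chains) come from.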