Zing Forum

Reading

Prefix Consistency: A New CoT Reliability Evaluation Method Without Log-Prob

Prefix Consistency detects answer stability by truncating and resampling thought chains, using the regeneration difference between correct and incorrect answers as a reliability signal. It achieves up to a 21-fold improvement in token efficiency without requiring token probabilities or self-scoring.

Chain-of-Thought (CoT) · Self-Consistency · Reasoning Reliability · LLM Inference Optimization · Prefix Consistency
Published 2026-05-08 20:28 · Recent activity 2026-05-11 10:49 · Estimated read 5 min

Section 01

【Main Floor】Prefix Consistency: A New CoT Reliability Evaluation Method Without Log-Prob

This article proposes the Prefix Consistency method, addressing the efficiency dilemma of the self-consistency strategy in Chain-of-Thought (CoT) reasoning. By truncating and resampling thought chains to detect answer stability, it uses the regeneration difference between correct and incorrect answers as a reliability signal. Without requiring token probabilities or self-scoring, it achieves up to a 21-fold improvement in token efficiency and is applicable to various LLM reasoning tasks.


Section 02

Background: Efficiency Dilemma of Chain-of-Thought Reasoning

Large language models rely on CoT to improve accuracy on complex tasks. The self-consistency strategy improves reliability through multi-path voting, but generating multiple complete CoT paths consumes substantial tokens and compute, and majority voting cannot distinguish reasoning quality: a confidently wrong path counts the same as a rigorous one. Improved methods rely on log-prob access or model self-scoring, which are limited by API support (many APIs do not expose token probabilities) or add prompt complexity.
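For reference, the baseline the article criticizes can be sketched in a few lines. This is a minimal illustration of classic self-consistency, assuming a hypothetical `sample_cot` callable that returns one (reasoning, answer) pair per call; note that every vote costs a full chain of tokens and all votes count equally.

```python
from collections import Counter

def self_consistency(sample_cot, n_paths=8):
    """Classic self-consistency baseline: sample n_paths complete
    CoT paths and majority-vote on their final answers.
    `sample_cot` is a hypothetical callable -> (reasoning, answer)."""
    answers = [sample_cot()[1] for _ in range(n_paths)]
    winner, votes = Counter(answers).most_common(1)[0]
    # Return the majority answer and its vote share (not a true
    # reliability estimate -- this is exactly the limitation above).
    return winner, votes / n_paths
```

The vote share is often used as a confidence proxy, but as the section notes, it cannot separate a stable derivation from a lucky guess that several paths happen to share.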


Section 03

Core Insight: Intrinsic Stability of Correct Reasoning

Prefix Consistency is based on a simple observation: correct reasoning paths tend to retain their conclusions after truncation and regeneration, while incorrect paths, which often rely on accidental jumps or vague associations, tend to deviate when regenerated. This stability difference forms a natural reliability signal that needs no external supervision and no access to the model's internal probabilities.
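The stability signal can be expressed as a per-path score: truncate the chain, regenerate the suffix several times, and measure how often the original conclusion survives. The sketch below is an assumed interface, not the paper's code; `regenerate(prefix)` stands in for "continue this truncated chain and return the final answer".

```python
def stability_score(prefix, original_answer, regenerate, k=4):
    """Fraction of k regenerations from a truncated prefix that
    reproduce the original answer. High score = stable conclusion.
    `regenerate(prefix)` is a hypothetical callable returning the
    final answer of one sampled continuation."""
    matches = sum(regenerate(prefix) == original_answer for _ in range(k))
    return matches / k
```

A path whose answer survives most regenerations scores near 1.0; a path that reached its answer by an accidental jump scores near 0.0, which is exactly the separation the method exploits.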


Section 04

Method Implementation: Truncation Resampling and Weighted Voting

Each CoT path is truncated at an intermediate point, and the remaining part is regenerated from the same context. The prefix consistency score is obtained by comparing the regenerated answer against the original answer. During aggregation, votes are weighted by these scores, so paths whose conclusions survive regeneration (a proxy for logical rigor) receive higher weight. The method requires no log-prob access and no complex prompting, and applies to any LLM that supports text continuation.
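Putting the steps above together, the full pipeline can be sketched as follows. This is a minimal illustration under stated assumptions: truncation at the chain's character-level midpoint, a hypothetical `regenerate(prefix)` callable for continuations, and score-weighted voting; the actual truncation scheme and number of regenerations are the paper's design choices, not fixed here.

```python
from collections import defaultdict

def prefix_consistency_vote(paths, regenerate, k=2, truncate_frac=0.5):
    """Hypothetical sketch of Prefix Consistency aggregation.
    `paths` is a list of (cot_text, answer) pairs already sampled;
    `regenerate(prefix)` returns the final answer of one sampled
    continuation of the truncated chain."""
    weights = defaultdict(float)
    for cot, answer in paths:
        cut = int(len(cot) * truncate_frac)   # truncate mid-chain
        prefix = cot[:cut]
        # Score = fraction of regenerations that keep the answer.
        score = sum(regenerate(prefix) == answer for _ in range(k)) / k
        weights[answer] += score              # stable paths weigh more
    # Weighted vote instead of a flat majority vote.
    return max(weights, key=weights.get)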


Section 05

Experimental Evidence: 21-Fold Leap in Token Efficiency

Validated on 5 reasoning models and 4 math/science benchmarks, Prefix Consistency outperforms both log-prob-based and self-scoring methods. Its median token cost is 1/4.6 that of traditional majority voting, reaching 1/21 in the best case, significantly reducing cost while maintaining accuracy.
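As a back-of-the-envelope illustration of where the savings come from (the concrete numbers below are assumptions for illustration, not the paper's measurements): majority voting pays for many full chains, while prefix consistency pays for a few full chains plus short regenerated suffixes.

```python
def majority_vote_tokens(n_paths, chain_len):
    # Classic self-consistency: n_paths complete chains.
    return n_paths * chain_len

def prefix_consistency_tokens(n_paths, chain_len, k_regen=2, truncate_frac=0.5):
    # n_paths complete chains, plus k_regen regenerated suffixes each.
    suffix_len = chain_len * (1 - truncate_frac)
    return int(n_paths * chain_len + n_paths * k_regen * suffix_len)

# Assumed scenario: 64-way voting vs. 4 scored paths, 1000-token chains.
baseline = majority_vote_tokens(64, 1000)                       # 64000 tokens
scored = prefix_consistency_tokens(4, 1000, k_regen=2)          # 8000 tokens
```

Under these assumed numbers the scored variant uses 8x fewer tokens; the paper's reported 4.6x median and 21x best-case savings depend on how many paths and regenerations each method actually needs to reach the same accuracy.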


Section 06

Advantages and Application Scenarios

The method is highly versatile (it needs no model-specific probability outputs) and easy to deploy. It suits complex tasks such as mathematical problem-solving, scientific reasoning, and code-generation verification, making it a strong choice when budgets are tight but accuracy must be maintained.


Section 07

Limitations and Future Directions

The method still requires generating multiple reasoning paths; on highly deterministic tasks, regenerations vary too little to yield a useful signal; and truncation points require task-specific tuning. Future directions include adaptive truncation strategies, combination with other reliability signals, and extension to multimodal reasoning and generation tasks.