Zing Forum

ETR: Efficient Chain-of-Thought Reasoning via Entropy Trend Reward

This article introduces the ETR (Entropy Trend Reward) method, which significantly shortens the chain-of-thought length while improving model accuracy by focusing on the uncertainty trajectory during reasoning rather than simply reducing global entropy.

Tags: Chain-of-Thought Reasoning · CoT Optimization · Entropy Trend Reward · GRPO · Reasoning Efficiency · Large Language Models · DeepSeek · Uncertainty Trajectory
Published 2026-04-07 10:53 · Recent activity 2026-04-08 10:21 · Estimated read: 5 min

Section 01

Introduction to ETR: Efficient Chain-of-Thought Reasoning via Entropy Trend Reward

This article introduces the ETR (Entropy Trend Reward) method. Its core insight is that reasoning efficiency depends on the trajectory of entropy change rather than the absolute value of global entropy. Through a trajectory-aware reward mechanism, ETR shortens the chain-of-thought length (by 67% on average) while improving model accuracy (by 9.9% on average), offering a new direction for chain-of-thought reasoning optimization. The project code has been open-sourced: https://github.com/Xuan1030/ETR


Section 02

Background: Efficiency Dilemma of Chain-of-Thought Reasoning and Limitations of Existing Methods

Although Chain-of-Thought (CoT) reasoning improves a model's ability to handle complex tasks, it tends to be verbose and inefficient, increasing latency and compute cost. Existing optimization strategies have limitations: length penalties force indiscriminate shortening, and global entropy minimization suppresses creative thinking in the exploration phase; neither accounts for how uncertainty evolves over the course of reasoning.


Section 03

ETR Method: Design of Trajectory-Aware Entropy Trend Reward

Based on the insight that the trajectory of entropy change, not its absolute level, determines efficiency, ETR designs a trajectory-aware reward mechanism:

1. Monitor the predictive entropy at each reasoning step to form a trajectory.
2. Quantify the trend of the entropy sequence.
3. Give positive reward to trajectories whose entropy trends downward.
4. Integrate the reward into the GRPO framework to optimize the policy.

Technical highlights include preserving exploratory behavior, identifying efficient reasoning patterns, and adapting to different task characteristics.
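The steps above can be sketched in Python. This is a minimal illustration under stated assumptions, not the repository's implementation: per-step entropy is taken as the Shannon entropy of each step's next-token distribution, the trend is quantified as a least-squares slope over the trajectory, and the GRPO integration is shown only as the standard group-normalized advantage. All function names here are hypothetical.

```python
import math

def step_entropy(probs):
    """Shannon entropy (in nats) of one reasoning step's
    next-token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def entropy_trend(entropies):
    """Least-squares slope of the entropy trajectory.
    A negative slope means uncertainty falls as reasoning proceeds."""
    n = len(entropies)
    if n < 2:
        return 0.0
    mean_x = (n - 1) / 2
    mean_y = sum(entropies) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(entropies))
    var = sum((x - mean_x) ** 2 for x in range(n))
    return cov / var

def trend_reward(entropies, scale=1.0):
    """Reward only trajectories with a downward entropy trend; flat or
    rising trajectories get zero, so early high-entropy exploration is
    not penalized the way global entropy minimization would."""
    return scale * max(0.0, -entropy_trend(entropies))

def grpo_advantages(rewards):
    """Standard GRPO group-relative advantage: normalize each sampled
    completion's reward against its group's mean and std."""
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mean) / (std + 1e-8) for r in rewards]

# A trajectory whose entropy steadily drops earns a positive reward:
falling = [2.0, 1.5, 1.0, 0.5]
rising = [0.5, 1.0, 1.5, 2.0]
print(trend_reward(falling))  # 0.5 (slope is -0.5)
print(trend_reward(rising))   # 0.0
```

In a GRPO rollout, each sampled chain of thought would receive its `trend_reward` (typically combined with a correctness reward), and `grpo_advantages` would turn the group's rewards into the advantages that drive the policy update.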


Section 04

Experimental Verification: Dual Improvement of Accuracy and Efficiency by ETR

On the DeepSeek-R1-Distill-7B model, across multiple benchmarks, accuracy improved by an average of 9.9%, chain-of-thought length fell by an average of 67%, and the efficiency-accuracy trade-off surpassed the baselines. The method also performs consistently across model architectures and scales, indicating strong generalization.


Section 05

Practical Significance and Application Prospects of ETR

ETR can significantly reduce reasoning costs (saving compute resources), improve user experience (a more concise, readable reasoning process), and open new directions for efficient-reasoning research (such as extensions to fine-grained trajectory modeling).


Section 06

Conclusion: Breakthroughs and Future Insights of ETR

ETR achieves a breakthrough in chain-of-thought reasoning by focusing on the entropy evolution trajectory, suggesting that optimizing complex AI systems requires attending to the characteristics of dynamic processes. As large-model applications become widespread, such efficiency optimizations will help democratize AI. Code open-source link: https://github.com/Xuan1030/ETR