Zing Forum

Reading

SmartThinker: Progressive Chain-of-Thought Length Calibration for More Efficient Large Model Reasoning

The SmartThinker method proposed by the Shanghai Jiao Tong University team achieves up to 52.5% output compression while maintaining reasoning accuracy through dynamic chain-of-thought length calibration, and has been accepted by ICML 2026.

Tags: Chain-of-Thought optimization · reasoning efficiency · GRPO · large model reasoning · ICML 2026 · Shanghai Jiao Tong University · length calibration · reinforcement learning
Published 2026-05-03 02:36 · Recent activity 2026-05-03 02:53 · Estimated read: 5 min

Section 01

SmartThinker: Progressive Chain-of-Thought Length Calibration for a Win-Win in Large Model Reasoning Efficiency and Accuracy

The Shanghai Jiao Tong University team proposed SmartThinker, a method that achieves up to 52.5% output compression while maintaining reasoning accuracy through dynamic chain-of-thought length calibration. The work has been accepted to ICML 2026. This article discusses its background, methodology, experiments, and impact.


Section 02

Efficiency Dilemma of Large Reasoning Models and Limitations of Existing Solutions

In recent years, large reasoning models (LRMs) such as OpenAI o1 and DeepSeek-R1 have relied on long Chain-of-Thought (CoT) traces to improve performance on complex tasks, but long CoT brings redundancy, soaring inference costs, and response latency. Existing GRPO-based approaches use static length rewards that cannot adapt to problem difficulty, which easily leads to over-compression or under-compression.


Section 03

Two Core Innovations of SmartThinker

SmartThinker rests on two innovations:

1. Dynamic optimal length estimation: during training, estimate the optimal reasoning-chain length for each problem and guide the model toward that critical point.
2. Dynamic reward coefficient modulation: avoid unduly penalizing correct but longer reasoning paths, so the model learns to "be long when needed and short when possible."
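The two mechanisms above can be sketched in Python. This is a minimal illustrative sketch, not the authors' actual formulation: the function names, the group-minimum estimator for the optimal length, and the bounded penalty form are all assumptions.

```python
# Hedged sketch of a SmartThinker-style dynamic length reward.
# All names and formulas here are illustrative assumptions.

def estimate_optimal_length(lengths, correct):
    """Estimate a per-problem optimal CoT length as the shortest
    length among the correct rollouts in a sampled group (a simple
    proxy for the 'critical point' described in the text)."""
    correct_lengths = [l for l, c in zip(lengths, correct) if c]
    if not correct_lengths:
        return None  # no correct sample: no length signal for this problem
    return min(correct_lengths)

def length_reward(length, correct, opt_len, alpha=0.5):
    """Combine the accuracy reward with a soft, modulated length penalty.

    The penalty is bounded and applies only to correct answers, so a
    correct-but-long path is penalized gently rather than rejected,
    matching the 'long when needed, short when possible' intuition."""
    acc = 1.0 if correct else 0.0
    if opt_len is None or not correct:
        return acc
    # Relative excess over the estimated optimal length
    excess = max(0.0, (length - opt_len) / opt_len)
    return acc - alpha * excess / (1.0 + excess)  # penalty capped below alpha

# Example: a group of 4 rollouts for one problem (GRPO-style sampling)
lengths = [120, 300, 450, 800]
correct = [False, True, True, False]
opt = estimate_optimal_length(lengths, correct)
rewards = [length_reward(l, c, opt) for l, c in zip(lengths, correct)]
```

In this toy group, the shortest correct rollout (300 tokens) anchors the estimate, the correct-but-longer rollout keeps most of its reward, and incorrect rollouts receive no length shaping at all.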


Section 04

Experimental Results Verify a Win-Win of Efficiency and Accuracy

Across multiple benchmarks, SmartThinker achieves an average of 52.5% output compression while maintaining or even improving accuracy; on hard tasks such as AIME25, accuracy improves by up to 16.6%. Moderate length constraints push the model to focus on key steps and avoid unproductive loops.


Section 05

Open-Source Implementation and Usage Guide of SmartThinker

The team has open-sourced the training/testing code and pretrained models at 1.5B and 4B parameters, built on Python 3.12 and PyTorch 2.8.0. The usage workflow covers environment preparation, data preprocessing, Wandb configuration, training, model conversion, and verification, with complete scripts provided to lower the barrier to reproduction.
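The workflow above might look roughly like the following shell session. Every script name, path, and flag below is an illustrative assumption; the repository's own README and scripts are the authoritative entry points.

```shell
# Hypothetical reproduction workflow -- script and path names are
# assumptions, not the repository's actual file names.

# 1. Environment preparation (Python 3.12 / PyTorch 2.8.0 per the post)
conda create -n smartthinker python=3.12 -y
conda activate smartthinker
pip install torch==2.8.0
pip install -r requirements.txt

# 2. Data preprocessing
python preprocess.py --output data/processed

# 3. Wandb configuration for experiment tracking
wandb login
export WANDB_PROJECT=smartthinker

# 4. Training (e.g. the 1.5B model)
bash scripts/train_1.5b.sh

# 5. Model conversion for inference
python convert.py --ckpt checkpoints/latest --out model_hf/

# 6. Effect verification on a benchmark
python evaluate.py --model model_hf/ --benchmark aime25
```

The six numbered steps mirror the usage process listed in the text; only the `conda`, `pip`, and `wandb login` invocations are standard tooling commands, while the rest depend on the released scripts.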


Section 06

Technical Breakthroughs and Industrial Value of SmartThinker

This method marks a new stage in reasoning-model optimization, shifting from extending CoT to dynamically compressing it. Its industrial value includes cost savings in token-billed scenarios, lower latency, suitability for edge deployment, and reduced energy consumption and carbon footprint.


Section 07

Future Directions and Conclusion

In the future, the method could be extended to more model architectures, combined with distillation, and explored in online learning and multimodal settings. By achieving a win-win of efficiency and accuracy through fine-grained reward design, SmartThinker is well positioned to become a standard practice for reasoning-model deployment.