Zing Forum


CAL-GRPO: Calibrated Reinforcement Learning Enables Large Models to "Learn by Trial and Error"

CAL-GRPO addresses the gradient bias problem in multi-turn chain-of-thought reasoning through an innovative attempt-level calibration strategy, enabling models to accumulate experience and improve incrementally through multiple attempts, significantly enhancing their ability to solve complex tasks.

Reinforcement Learning · Chain-of-Thought Reasoning · Multi-Turn Attempts · GRPO · Model Calibration · Trial-and-Error Learning · Verification@K
Published 2026-04-20 15:42 · Recent activity 2026-04-21 13:51 · Estimated read 6 min

Section 01

[Introduction] CAL-GRPO: Calibrated Reinforcement Learning for Large Models to Learn by Trial and Error

This article explores equipping large language models with multi-round iterative improvement capabilities: the model can make up to K consecutive attempts, each building a better solution from previous failure experiences and from the feedback of a hard verifier. CAL-GRPO's contribution is an attempt-level calibration strategy that corrects the gradient bias arising in this multi-turn setting, enabling models to accumulate experience across attempts and significantly improve on complex tasks.


Section 02

Background: A Shift from One-Time Success to Multi-Round Trial and Error

Current advanced reasoning models (e.g., the OpenAI o-series, DeepSeek-R1) rely on long chain-of-thought reasoning, but they implicitly assume that the first attempt must be perfect. This contrasts with how humans solve problems through trial and error: mathematicians repeatedly refine proofs, programmers debug code. The goal of this work is to give models multi-round iterative improvement capabilities, so that solutions are optimized based on past failures and verification feedback.


Section 03

Framework Design for Multi-Round Attempt Reasoning

Model reasoning is structured as a sequence of consecutive attempts: each attempt produces a complete chain of thought and an answer, which an external hard verifier judges correct or incorrect. Subsequent attempts can access the full history (failed paths plus verifier feedback) to identify error patterns, inherit valid fragments, adjust strategy, and gradually converge. The training objective is Verification@K: the probability that at least one of the first K attempts succeeds.
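The attempt loop described above can be sketched in Python. The `generate` and `verify` callables are placeholders for the model's sampler and the hard verifier, which the article does not specify; this is a minimal sketch assuming each new attempt conditions on the full history of failed answers and feedback:

```python
from typing import Callable, List, Tuple

def multi_attempt_solve(
    generate: Callable[[str, List[Tuple[str, str]]], str],  # hypothetical model sampler
    verify: Callable[[str, str], bool],                     # hypothetical hard verifier
    problem: str,
    k: int = 4,
) -> Tuple[bool, List[Tuple[str, str]]]:
    """Run up to k sequential attempts on one problem.

    Each attempt sees the full history of (failed answer, feedback)
    pairs, mirroring the multi-round setup: later attempts can mine
    earlier failures for error patterns and reusable fragments.
    Returns (success, history-of-failures), where success corresponds
    to a Verification@K hit within the first k attempts.
    """
    history: List[Tuple[str, str]] = []
    for _ in range(k):
        answer = generate(problem, history)
        if verify(problem, answer):
            return True, history       # at least one success within k attempts
        history.append((answer, "verifier: incorrect"))
    return False, history              # all k attempts failed
```

In a real system `generate` would be a conditioned LLM call and `verify` a unit-test runner or symbolic checker; here they are just interfaces.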


Section 04

CAL-GRPO: Calibrated Reinforcement Learning to Resolve Gradient Bias

The naive weighting strategy (positive weights for successful attempts, negative or zero weights for failed ones) suffers from selection bias: it ignores the contribution of earlier failures to later successes, yielding biased gradient estimates. CAL-GRPO achieves unbiased gradient estimation through an attempt-level calibration factor: each attempt's weight reflects both its own success or failure and its marginal contribution to subsequent successes. Mathematically, Verification@K is decomposed into a product of conditional probabilities, which quantifies each attempt's gradient contribution; the method extends GRPO and retains its sample efficiency.
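The article does not give the exact decomposition, but one reading consistent with "a product of conditional probabilities" is V@K = 1 − ∏ᵢ(1 − pᵢ), where pᵢ is the conditional success probability of attempt i given that attempts 1..i−1 failed. Under that assumption, the gradient of V@K with respect to pᵢ is ∏_{j≠i}(1 − pⱼ), so every attempt, including an early failure that enables a later success, carries a nonzero calibration weight. A minimal sketch of these quantities (the functional form is an assumption, not the paper's stated formula):

```python
from math import prod

def verification_at_k(p):
    """V@K = 1 - prod_i (1 - p_i): probability that at least one of
    the K attempts succeeds, where p[i] is the conditional success
    probability of attempt i given all earlier attempts failed.
    (Assumed decomposition; the paper's exact form is not given.)"""
    return 1.0 - prod(1.0 - pi for pi in p)

def calibration_weights(p):
    """dV@K/dp_i = prod_{j != i} (1 - p_j): each attempt's gradient
    weight is the probability that every other attempt fails.  A
    failed attempt preceding a success therefore still gets credit,
    unlike the naive positive/negative weighting scheme."""
    return [prod(1.0 - pj for j, pj in enumerate(p) if j != i)
            for i in range(len(p))]
```

Note how the weights behave at the extremes: if some attempt succeeds almost surely, every other attempt's weight shrinks toward zero, matching the intuition that its marginal contribution vanishes.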


Section 05

Experimental Validation: Effectiveness Advantages of CAL-GRPO

Synthetic tasks: CAL-GRPO significantly outperforms naive weighting and standard GRPO, with more stable convergence.
Real tasks (math: GSM8K, MATH; code: HumanEval): the Verification@K metric surpasses baselines, with high learning efficiency and robust generalization.
Ablations: removing the calibration factor degrades performance, indicating that modeling dependencies between attempts is key.
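A minimal empirical estimator for the Verification@K metric reported in these experiments might look as follows, assuming each evaluated problem records an ordered list of verifier pass/fail outcomes for its attempts (the data layout is an assumption for illustration):

```python
def empirical_verification_at_k(outcomes, k):
    """Fraction of problems solved within the first k attempts.

    `outcomes` is a list of per-problem attempt results, each an
    ordered list of booleans (True = verifier accepted the attempt).
    A problem counts as solved if any of its first k attempts passed.
    """
    return sum(any(o[:k]) for o in outcomes) / len(outcomes)
```

Sweeping k from 1 upward on the same evaluation logs yields the Verification@K curve used to compare methods.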


Section 06

Implications for AI Reasoning Research

Opens a new path for test-time computation (multi-round iterative, cumulative computation vs. a single long chain).
Suits human-machine collaboration (human feedback can serve as the verification signal).
Has self-improvement potential (actively generating tests, verifying outputs, and adjusting strategies).


Section 07

Limitations and Future Directions

Limitations: dependence on hard verifiers (difficult to obtain for open-ended tasks), a fixed K that is not optimal, and high computational overhead.
Future directions: integrate soft verifiers, develop adaptive attempt-termination strategies, extend to multi-modal reasoning, and maintain compatibility with model distillation/quantization.