Zing Forum


SRPO: A New Reinforcement Learning Framework Combining the Advantages of GRPO and SDPO

Researchers propose Sample Routing Policy Optimization (SRPO), which intelligently routes correct and failed samples to different optimization branches, combining the stability of GRPO with the fine-grained supervision of SDPO. On Qwen3-8B, it improves average performance by 3.4% over GRPO and 6.3% over SDPO while reducing computational cost by 17.2%.

SRPO · GRPO · SDPO · Reinforcement Learning · Large Language Models · Post-Training · Sample Routing · Policy Optimization
Published 2026-04-03 01:29 · Recent activity 2026-04-03 12:49 · Estimated read 5 min

Section 01

[Introduction] SRPO: A New Reinforcement Learning Framework Combining the Advantages of GRPO and SDPO

Researchers propose Sample Routing Policy Optimization (SRPO) to address the pain points of existing reinforcement-learning post-training methods: the coarse-grained credit assignment of GRPO and the long-term instability of SDPO. By intelligently routing correct and failed samples to different optimization branches, SRPO combines the stability of GRPO with the fine-grained supervision of SDPO. Experiments show that SRPO improves average performance on Qwen3-8B by 3.4% over GRPO and 6.3% over SDPO while reducing computational cost by 17.2%, offering an efficient new option for large-model post-training.


Section 02

Dilemmas of Existing Reinforcement Learning Post-Training Methods

The current mainstream method, GRPO, suffers from coarse-grained credit assignment: it penalizes every token of a failed sample uniformly and cannot locate the specific tokens that caused the error. The emerging method SDPO converges quickly through logit-level supervision, but is unstable over long training runs, a problem rooted in the optimization ambiguity of self-distilling from correct samples and the gradual degradation of the self-teacher signal over time.
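GRPO's coarse-grained credit assignment is easiest to see in a minimal sketch of its group-normalized advantage: each sampled completion gets a single scalar advantage, which every one of its tokens inherits. (This is an illustrative sketch of the standard GRPO advantage formula, not the paper's code; the function name is ours.)

```python
import math

def grpo_advantages(rewards):
    """Group-normalized advantages as in GRPO: one scalar per sampled
    completion, shared by ALL tokens of that completion."""
    mean = sum(rewards) / len(rewards)
    std = math.sqrt(sum((r - mean) ** 2 for r in rewards) / len(rewards)) or 1.0
    return [(r - mean) / std for r in rewards]

# A group of 4 sampled answers scored 1 (correct) or 0 (failed):
rewards = [1.0, 0.0, 0.0, 1.0]
advs = grpo_advantages(rewards)
# Every token of a failed sample receives the same negative advantage,
# regardless of which token actually introduced the error.
```

Because the advantage is constant across a completion, a mostly-correct failed answer is penalized at every position, which is exactly the coarse credit assignment SRPO's routing is meant to avoid.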


Section 03

Core Mechanisms and Technical Implementation of SRPO

The core of SRPO is intelligent sample routing: correct samples are routed to a GRPO branch (to reinforce correct behaviors), while failed samples are routed to an SDPO branch (to finely correct errors). SRPO also introduces entropy-aware dynamic weighting: low-entropy (high-confidence) self-distillation signals in failed samples receive high weights, while high-entropy, unreliable signals are suppressed. In implementation, the algorithm jointly optimizes the GRPO loss and the weighted SDPO loss while preserving the on-policy training property.
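The routing and entropy-aware weighting described above can be sketched as follows. This is a simplified illustration under our own assumptions about the data layout (a `correct` flag and per-token teacher distributions), not the authors' implementation; the exponential weighting rule and the temperature `tau` are hypothetical choices that merely satisfy the stated property (low entropy → weight near 1, high entropy → suppressed).

```python
import math

def token_entropy(probs):
    """Shannon entropy (in nats) of a next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def route_samples(samples, tau=1.0):
    """Minimal SRPO-style routing sketch: correct samples go to a
    GRPO-style branch; failed samples go to an SDPO-style branch,
    with entropy-aware weights on the self-distillation signal."""
    grpo_batch, sdpo_batch = [], []
    for s in samples:
        if s["correct"]:
            grpo_batch.append(s)  # reinforce correct behavior via GRPO
        else:
            # Confident (low-entropy) teacher tokens get weight near 1;
            # unreliable (high-entropy) tokens are exponentially suppressed.
            weights = [math.exp(-token_entropy(p) / tau)
                       for p in s["teacher_probs"]]
            sdpo_batch.append((s, weights))
    return grpo_batch, sdpo_batch

samples = [
    {"correct": True,  "teacher_probs": []},
    {"correct": False, "teacher_probs": [[1.0], [0.5, 0.5]]},
]
grpo_batch, sdpo_batch = route_samples(samples)
```

In this sketch the fully confident token (entropy 0) keeps full weight, while the maximally uncertain binary token (entropy ln 2) is down-weighted to about 0.5; the final objective would then combine the GRPO loss over `grpo_batch` with the weighted SDPO loss over `sdpo_batch`.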


Section 04

Experimental Verification Results of SRPO

On Qwen3-8B, SRPO's average score across five benchmarks (mathematical reasoning, code generation, etc.) is 3.4% higher than GRPO's and 6.3% higher than SDPO's, while computational cost drops by 17.2%. The advantages hold on Qwen3-32B as well, demonstrating effectiveness across model scales.


Section 05

Impact and Insights of SRPO on the Industry

SRPO advances the post-training paradigm by improving model capabilities without additional resources. It also deepens the community's understanding of sample quality, inspiring more targeted training strategies, and its open-source implementation is expected to become a mainstream next-generation post-training algorithm, driving progress in open-source large models.


Section 06

Limitations and Future Research Directions of SRPO

SRPO currently targets tasks with verifiable rewards (such as mathematics and code); its applicability to open-ended generation remains to be verified, and entropy-weight computation adds extra overhead. Future directions include extending to more task types, exploring finer-grained sample classification, and combining SRPO with offline RL and other techniques to further improve performance.