Zing Forum


FIPO: Unleashing Deep Reasoning Capabilities of Large Models via Future-Aware KL Divergence

This article introduces FIPO (Future-KL Influenced Policy Optimization), a reinforcement learning method that needs no value model. Through a fine-grained token-level credit assignment mechanism, it extends chain-of-thought length from around 4,000 to over 10,000 tokens, reaching 58% accuracy on AIME 2024 and surpassing DAPO and o1-mini.

Tags: FIPO, Reinforcement Learning, Chain of Thought, GRPO, DAPO, Reasoning Optimization, Large Model Training, Qwen, AIME, Credit Assignment
Published 2026-04-07 16:44 · Recent activity 2026-04-07 16:51 · Estimated read: 7 min

Section 01

[Introduction] FIPO: A New Pure Reinforcement Learning Method Breaking the Reasoning Length Bottleneck of Large Models

FIPO (Future-KL Influenced Policy Optimization), open-sourced by Alibaba Tongyi Lab, is a value-model-free reinforcement learning method. Via a fine-grained token-level credit assignment mechanism, it extends chain-of-thought length to over 10,000 tokens and reaches 58% accuracy on AIME 2024 (surpassing DAPO and o1-mini), opening a new path for training large models' reasoning capabilities with pure RL.


Section 02

Background: The '4000-token Bottleneck' of Traditional Reasoning Methods

Mainstream pure RL reasoning methods such as GRPO and DAPO can elicit reasoning abilities, but their reasoning length tends to stagnate around 4,000 tokens, limiting performance on complex problems (e.g., math competition questions, multi-step logical reasoning). Enabling models to autonomously deepen their reasoning has therefore become a pressing open problem.


Section 03

Core of FIPO: Future-Aware Token-Level Credit Assignment Mechanism

FIPO breaks the bottleneck through refined token-level credit assignment, with core steps including:

  1. Local Signal: compute the log-probability shift (Δlog p_t) between the current and old policies to capture the direction of policy change at each token;
  2. Future-Aware Accumulation: discount and accumulate the signals from the rest of the trajectory (FutureKL_t) to reflect a token's impact on long-term reasoning;
  3. Influence Weighting: map the future signal to a bounded weight and use it to rescale the original advantage, steering the policy toward effective reasoning;
  4. Loss Function: optimize a PPO/DAPO-style clipped objective built on the future-aware advantages, keeping training stable and simple to implement.
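The steps above can be sketched in a few lines of Python. This is a minimal illustration, assuming per-token log probabilities and a sequence-level advantage broadcast to every token; the tanh squashing and the `1 + w` rescaling are illustrative assumptions, not the paper's exact formulas.

```python
import math

def future_aware_advantages(logp_new, logp_old, advantages, gamma=0.97):
    """Sketch of FIPO-style token-level credit assignment.

    logp_new / logp_old: per-token log probs under the current and old policy.
    advantages: the sequence-level advantage broadcast to each token.
    The weighting form here is an illustrative assumption.
    """
    T = len(logp_new)
    # 1. Local signal: per-token log-probability shift between policies.
    delta = [n - o for n, o in zip(logp_new, logp_old)]
    # 2. Future-aware accumulation: discounted sum of the signals that
    #    come *after* each token (computed right-to-left in O(T)).
    future = [0.0] * T
    acc = 0.0
    for t in range(T - 1, -1, -1):
        future[t] = acc
        acc = delta[t] + gamma * acc
    # 3. Influence weighting: squash the unbounded future signal into a
    #    bounded multiplier, then rescale each token's advantage.
    weights = [math.tanh(f) for f in future]
    return [a * (1.0 + w) for a, w in zip(advantages, weights)]
```

Note that the last token has an empty future, so its advantage passes through unchanged; earlier tokens are credited or penalized according to how the policy shifts downstream of them.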

Section 04

Experimental Evidence: Dual Breakthroughs in Performance and Reasoning Length

Experimental results on Qwen2.5-32B-Base show:

  • Leading Performance: AIME 2024 Pass@1 reaches 58% (peak)/56% (converged), surpassing DAPO (50%), DeepSeek-R1-Zero (47%), and o1-mini (55%);
  • Length Expansion: The average reasoning length exceeds 10,000 tokens, and the extra tokens go toward effective reasoning such as self-reflection and re-derivation rather than redundancy;
  • Training Dynamics: The length distribution of FIPO continues to expand, with a strong positive correlation between accuracy and length, while DAPO's length stagnates in the 4k range.

Section 05

Technical Implementation: Architecture and Parameters Based on VeRL/DAPO

FIPO is based on the VeRL framework and DAPO recipe, with key adjustments including:

  • Architecture Parameters: actor_rollout_ref.actor.ppo_mini_batch_size increased from 32 to 64, loss_mode switched to future_kl;
  • Hyperparameters: Discount factor γ in 0.95-0.99, influence-weight clipping boundaries (ε_f,low = 0.1, ε_f,high = 0.2), safety threshold of 5.0;
  • Launch Method: Reuse the DAPO launcher structure; execute bash recipe/fipo/run_fipo_qwen2.5_32b.sh to start.
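As a concrete reading of the hyperparameters above, the helper below maps an unbounded future-KL signal to a bounded influence weight using the listed clipping boundaries and safety threshold. The asymmetric max/min mapping is an assumption for illustration, not the repository's actual `loss_mode=future_kl` implementation.

```python
def influence_weight(future_kl, eps_low=0.1, eps_high=0.2, safety=5.0):
    """Map an unbounded future-KL signal to a bounded influence weight.

    eps_low / eps_high mirror the article's clipping boundaries
    (eps_f,low = 0.1, eps_f,high = 0.2); signals beyond the safety
    threshold of 5.0 are discarded entirely. The mapping itself is an
    illustrative assumption, not the paper's exact formula.
    """
    if abs(future_kl) > safety:
        # Safety threshold: ignore extreme outlier signals.
        return 0.0
    # Asymmetric clipping: shrink the advantage by at most eps_low,
    # amplify it by at most eps_high.
    return max(-eps_low, min(eps_high, future_kl))
```

Under this reading, a token's advantage would then be rescaled as `a * (1 + influence_weight(f))`, so the adjustment stays within a narrow, stable band around the original value.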

Section 06

Method Comparison: Summary of FIPO's Core Advantages

Compared with traditional methods, FIPO has significant advantages:

| Dimension | DAPO | FIPO |
| --- | --- | --- |
| Credit Assignment | Sequence-level uniform advantage | Token-level future-aware advantage |
| Length Growth | Stagnates after 4k tokens | Continues to expand to 10k+ |
| Training Stability | Good | Maintained via clipping |

Core Advantages: Pure RL training without a value model, fine-grained token-level signals, scalable length, extra length converted into effective reasoning, simple and reproducible implementation.


Section 07

Application Prospects: Potential of Pure RL Reasoning and Open-Source Value

The applications and significance of FIPO include:

  • Research Insights: Demonstrating the potential of pure RL on clean base models and offering a training-stage route to inference-time computation scaling;
  • Practical Applications: Math competitions (complex proofs), code generation (iterative corrections), scientific research (hypothesis derivation), educational tutoring (detailed problem-solving);
  • Open-Source Contribution: Publishing code, model weights, and scripts, based on a mature architecture for easy reproduction and improvement.

Section 08

Conclusion: FIPO Opens a New Path for Deep Reasoning

FIPO achieves fine-grained credit assignment via future-aware KL divergence, breaking the reasoning length bottleneck and increasing AIME accuracy to 58%, with extra length converted into effective reasoning like self-reflection. It provides a feasible path for improving large models' reasoning capabilities without relying on manual long chain-of-thought annotations. As inference-time computation expansion becomes a trend, FIPO's training optimization method will play an important role.