LARFT: Bridging the Gap Between Length Cognition and Generation Behavior in Large Language Models

LARFT uses length-aware reinforcement fine-tuning to teach large language models to genuinely understand and execute length-constrained instructions, improving length-control benchmarks by an average of 20.92 points while leaving general capabilities almost unchanged.

Large Language Models · Length Control · Reinforcement Learning · Instruction Following · LLM Fine-Tuning · Cognition-Behavior Alignment
Published 2026-05-11 15:43 · Recent activity 2026-05-11 15:47 · Estimated read: 5 min

Section 01

Introduction: LARFT, Bridging the Gap Between Length Cognition and Generation Behavior in Large Language Models

LARFT (Length-Aware Reinforcement Fine-Tuning) addresses the "cognition-behavior gap" that large language models exhibit on length-control tasks. The method trains models to genuinely understand and execute length-constrained instructions, achieving an average improvement of 20.92 points on length-control benchmarks while keeping general capabilities almost unchanged.

Section 02

Background: Pain Points of Length Control in Large Models and Limitations of Traditional Methods

Large language models perform well on complex instruction-following tasks, but when asked to control output length precisely, they tend to produce text that is either too brief or too verbose. This mismatch between what the model is told and what it generates is the "cognition-behavior gap". Traditional methods impose length constraints through external signals or auxiliary optimization objectives, but they ignore the more fundamental problem: the model lacks an inherent cognitive representation of "length".

Section 03

Method: Core Innovations and Technical Architecture of LARFT

The core innovation of LARFT is a hindsight length-awareness task: after generating a response, the model learns to recognize the actual length of the text it produced, which optimizes its length representation at the cognitive level while refining its policy at the behavioral level. The implementation is built on a modified verl framework and comprises:

1. A unified loss function that combines the SFT loss with a length-aware reinforcement-learning objective;
2. A cosine schedule that dynamically adjusts the learning rate;
3. A custom length reward function;
4. Specific training configurations (e.g., batch size 128, learning rate 1e-6).

A sketch of how these pieces might fit together appears below.
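To make the components concrete, here is a minimal Python sketch of a length reward, a unified loss, and a cosine learning-rate schedule. The function names, the reward shape, and the mixing weight alpha are illustrative assumptions; the paper's actual verl-based implementation may differ.

```python
import math

import torch


def length_reward(response: str, target_len: int, tolerance: float = 0.1) -> float:
    # Hypothetical reward shape: full reward inside a relative tolerance
    # band around the target length, decaying linearly outside it.
    actual_len = len(response.split())  # word count; a token count also works
    rel_err = abs(actual_len - target_len) / max(target_len, 1)
    if rel_err <= tolerance:
        return 1.0
    return max(0.0, 1.0 - (rel_err - tolerance))


def unified_loss(sft_loss: torch.Tensor, rl_loss: torch.Tensor,
                 alpha: float = 0.5) -> torch.Tensor:
    # `alpha` is an assumed mixing weight; the post does not state how the
    # SFT and RL terms are balanced.
    return alpha * sft_loss + (1.0 - alpha) * rl_loss


def cosine_lr(step: int, total_steps: int,
              base_lr: float = 1e-6, min_lr: float = 0.0) -> float:
    # Cosine decay from base_lr (1e-6, matching the reported config) to min_lr.
    progress = min(step / max(total_steps, 1), 1.0)
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * progress))
```

In practice, the reward would be computed per rollout inside the RL loop and the unified loss applied at each optimizer step; both hooks are assumptions about where such code would sit in a verl-style trainer.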

Section 04

Experimental Evidence: Significant Improvement in Length Control Performance and Preservation of General Capabilities

Experiments on four base models show that LARFT achieves an average improvement of 20.92 points across three length instruction-following benchmarks, significantly outperforming existing baselines. At the same time, it declines by only 1.45 points on four general-capability benchmarks: length control improves substantially while general capabilities remain almost unchanged.

Section 05

Practical Applications: Open-Source Solution and Applicable Scenarios

LARFT ships as an open-source implementation with a complete training pipeline: rapid sample generation or custom dataset conversion, flexible hyperparameter configuration, and multi-GPU training (e.g., 8x A800). It suits scenarios that require precise control of output length, such as summarization, social-media content, and academic writing assistance.
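As a rough illustration of the custom dataset-conversion step mentioned above, the sketch below attaches an explicit word-count constraint to plain (instruction, answer) pairs stored as JSON lines. The field names, the prompt template, and the converter itself are hypothetical and not taken from the repository's actual schema.

```python
import json


def convert_dataset(in_path: str, out_path: str) -> None:
    # Rewrite each (instruction, answer) pair as a length-constrained prompt,
    # using the reference answer's word count as the target length.
    with open(in_path, encoding="utf-8") as fin, \
         open(out_path, "w", encoding="utf-8") as fout:
        for line in fin:
            example = json.loads(line)  # hypothetical keys: "instruction", "answer"
            target_len = len(example["answer"].split())
            example["prompt"] = (
                f"{example['instruction']}\n"
                f"Answer in approximately {target_len} words."
            )
            example["target_length"] = target_len
            fout.write(json.dumps(example, ensure_ascii=False) + "\n")
```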

Section 06

Conclusion and Outlook: Insights from Cognition-Behavior Alignment and Future Directions

The success of LARFT suggests that teaching a model to understand the essence of a task (cognition) is more effective than optimizing its behavior alone. This "cognition-behavior alignment" approach could extend to controlling other generation attributes, such as style consistency and emotional intensity. As large-model applications deepen, fine-grained control over generated content will only grow in importance, and LARFT offers a technical reference point for it.