Zing Forum

Faithful GRPO: Enhancing Visual Spatial Reasoning Credibility of Multimodal Models via Constrained Policy Optimization

This article introduces Faithful GRPO (FGRPO), a GRPO variant that enforces logical consistency and visual grounding constraints via Lagrangian dual ascent, reducing the reasoning inconsistency rate from 24.5% to 1.7%.

Tags: Multimodal Reasoning · GRPO · Visual Spatial Reasoning · Chain of Thought · Constrained Optimization · Explainable AI
Published 2026-04-10 01:15 · Recent activity 2026-04-10 10:45 · Estimated read 5 min
Section 01

[Main Floor] Faithful GRPO: A New Method to Enhance Visual Spatial Reasoning Credibility of Multimodal Models

This article introduces Faithful GRPO (FGRPO), a constrained policy optimization method addressing the credibility issue of visual spatial reasoning in multimodal models. Current multimodal reasoning models face problems such as logical inconsistency between the chain of thought and the answer, and lack of faithful reference to visual evidence in reasoning. FGRPO enforces logical consistency and visual grounding constraints via Lagrangian dual ascent, reducing the reasoning inconsistency rate from 24.5% to 1.7% while improving answer accuracy.

Section 02

[Background] Hidden Issues of Multimodal Reasoning Models: Decline in Reasoning Quality Behind Accuracy Improvements

Reinforcement Learning with Verifiable Rewards (RLVR) is the mainstream paradigm for training multimodal reasoning models. While GRPO optimization improves accuracy, it carries a hidden cost: reasoning quality declines, manifesting as logical inconsistency (contradictions between the chain of thought and the answer) and weak visual grounding (reasoning descriptions that do not match the image). Across seven real-world spatial reasoning benchmarks, the research team found this problem to be widespread and proposed a dual-dimensional evaluation framework covering logical consistency and visual grounding.
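The two evaluation dimensions can be pictured as batch-level metrics over judged samples. The sketch below is illustrative only: the field names (`answer_contradicts_cot`, `supported_claims`, `total_claims`) and the scoring formulas are assumptions, not the paper's actual judging procedure.

```python
# Hypothetical sketch of the dual-dimensional evaluation: logical
# consistency (does the answer follow the chain of thought?) and visual
# grounding (do reasoning claims match the image?). Field names are
# illustrative stand-ins for some external judge's per-sample output.

def inconsistency_rate(samples):
    """Fraction of samples whose final answer contradicts their chain of thought."""
    flags = [s["answer_contradicts_cot"] for s in samples]
    return sum(flags) / len(flags)

def grounding_score(samples):
    """Mean fraction of reasoning claims supported by the visual evidence."""
    return sum(s["supported_claims"] / s["total_claims"] for s in samples) / len(samples)

batch = [
    {"answer_contradicts_cot": True,  "supported_claims": 2, "total_claims": 4},
    {"answer_contradicts_cot": False, "supported_claims": 3, "total_claims": 3},
    {"answer_contradicts_cot": False, "supported_claims": 4, "total_claims": 5},
]
print(inconsistency_rate(batch))  # one of three answers contradicts its reasoning
print(grounding_score(batch))    # mean supported-claim fraction across the batch
```

Framing both dimensions as batch statistics is what lets them later serve as constraint violations in the optimization objective.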

Section 03

[Methodology] Constrained Optimization Scheme of Faithful GRPO

FGRPO is a GRPO variant whose core idea is to fold reasoning-quality constraints into the optimization objective. Specifically, two types of batch-level constraints are introduced: consistency constraints (penalizing logical contradictions between the chain of thought and the answer) and grounding constraints (penalizing reasoning descriptions that do not match the visual evidence). Lagrangian dual ascent dynamically adjusts the constraint weights: low weights early in training let the model learn basic reasoning structure, while the weights rise later to force quality improvements, avoiding both training collapse and constraints too weak to matter.
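The dual-ascent mechanism described above can be sketched in a few lines. This is a minimal toy, not the paper's implementation: the violation budgets (`eps_c`, `eps_g`), the dual learning rate, and the fixed violation rates in the loop are all assumed values chosen for illustration.

```python
# Minimal sketch of Lagrangian dual ascent for two batch-level
# constraints. All names and numbers are illustrative assumptions,
# not taken from the FGRPO paper.

def dual_ascent_step(lmbda, violation, budget, lr=0.05):
    """Raise the multiplier while the batch violates its budget; clamp at 0."""
    return max(0.0, lmbda + lr * (violation - budget))

def penalized_reward(reward, lmbda_c, lmbda_g, c_viol, g_viol):
    """Task reward minus Lagrangian penalties for the two constraints."""
    return reward - lmbda_c * c_viol - lmbda_g * g_viol

# Toy training loop: multipliers start at zero (weak constraints early)
# and grow automatically while measured violations exceed their budgets,
# mirroring the low-early / high-late schedule described above.
lmbda_c = lmbda_g = 0.0
eps_c, eps_g = 0.02, 0.10          # allowed violation budgets (assumed)
for step in range(200):
    c_viol, g_viol = 0.25, 0.15    # pretend batch-level violation rates
    r = penalized_reward(1.0, lmbda_c, lmbda_g, c_viol, g_viol)
    lmbda_c = dual_ascent_step(lmbda_c, c_viol, eps_c)
    lmbda_g = dual_ascent_step(lmbda_g, g_viol, eps_g)
```

Because the consistency constraint is violated by a wider margin than the grounding constraint, its multiplier grows faster, so the penalty pressure adapts to whichever constraint the policy is currently failing.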

Section 04

[Experimental Validation] Significant Improvements of FGRPO on Qwen2.5-VL

Testing Qwen2.5-VL-7B and 3B on seven spatial reasoning datasets yielded strong results: the inconsistency rate dropped from 24.5% to 1.7%, grounding scores rose by 13%, and answer accuracy improved at the same time. The effect held at both model scales, indicating good generalization.

Section 05

[Insights] Significance of FGRPO for Building Trustworthy AI Systems

FGRPO offers three methodological insights for trustworthy AI:

1. Supervising the reasoning process is more effective than supervising only outcomes for obtaining reliable model behavior.
2. Lagrangian dual ascent is an effective way to incorporate complex constraints into reinforcement learning.
3. Interpretability and performance can be synergistic: faithful reasoning leads to more accurate conclusions.

Section 06

[Conclusion] Progress and Future Directions of FGRPO

FGRPO is an important advancement in multimodal reasoning training methods, enhancing reasoning credibility through explicit constraint optimization while maintaining or improving accuracy. As multimodal AI is applied in high-risk fields, reasoning credibility will become a key indicator, and FGRPO's constraint optimization paradigm lays the foundation for future research.