Zing Forum

AVR: Adaptive Reasoning Path Learning Framework for Efficient Visual Reasoning

AVR decomposes visual reasoning into three cognitive functions—perception, logical reasoning, and answer application—allowing the model to dynamically select the simplest response format. It reduces token usage by 50-90% while maintaining accuracy, effectively alleviating overthinking in visual reasoning models.

Tags: Visual Reasoning · Adaptive Reasoning · Overthinking · Efficiency Optimization · Multimodal Models
Published 2026-04-16 10:59 · Recent activity 2026-04-17 10:29 · Estimated read 7 min

Section 01

Introduction to the AVR Framework: Adaptive Path Learning to Alleviate Overthinking in Visual Reasoning

AVR (Adaptive Reasoning Path Learning Framework for Efficient Visual Reasoning) decomposes visual reasoning into three cognitive functions—perception, logical reasoning, and answer application—allowing the model to dynamically select the simplest response format. It reduces token usage by 50-90% while maintaining accuracy, effectively addressing the overthinking problem in visual reasoning models.


Section 02

The Dilemma of Overthinking in Visual Reasoning

As their visual understanding capabilities expand, Visual Reasoning Models (VRMs) face a significant efficiency problem: overthinking. They generate lengthy reasoning chains for every task, regardless of problem complexity. The consequences include increased inference latency, higher computational cost, degraded user experience, and a risk of error accumulation. The root cause is redundant reasoning paths and the model's inability to adapt its reasoning process to the problem at hand.


Section 03

Cognitive Function Decomposition: Three Core Layers of Visual Reasoning

AVR decomposes visual reasoning into three core functions:

  1. Visual Perception: extracts image information (objects, spatial relationships, etc.); sufficient for simple factual questions;
  2. Logical Reasoning: handles derivations such as mathematical calculation and causal inference; needed for complex questions;
  3. Answer Application: integrates the results and outputs the final answer.

Key insight: different questions place different demands on the three layers, yet current VRMs run the complete pipeline uniformly, wasting effort on easy cases.
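One way to read this decomposition is as a routing table from question type to the layers it actually needs. The table and question-type names below are illustrative assumptions, not AVR's code (the real model learns this routing implicitly):

```python
# Illustrative sketch of the three-layer decomposition as a routing table.
# The question types and the table are assumptions for illustration only.

LAYERS_NEEDED = {
    "factual": ("perception",),               # e.g. "What color is the car?"
    "spatial": ("perception",),               # e.g. "Which box is left of the ball?"
    "math":    ("perception", "reasoning"),   # e.g. "What is the total price shown?"
}

def required_layers(question_type: str) -> list[str]:
    # Fall back to the full stack for unknown question types.
    steps = list(LAYERS_NEEDED.get(question_type, ("perception", "reasoning")))
    steps.append("answer")  # answer application always runs last
    return steps
```

The point of the sketch: a factual question only traverses perception and answer application, while a math question needs all three layers.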

Section 04

Adaptive Response Format: Dynamically Selecting Reasoning Depth

AVR introduces three response formats, and the model dynamically selects based on problem characteristics:

  1. Full Format: detailed perception, reasoning, and answer derivation; suited to complex questions;
  2. Perception-Only Format: outputs just the perception result; suited to direct-observation questions such as spatial understanding;
  3. Direct Answer: no intermediate reasoning; suited to highly direct or high-confidence questions.

Letting the model choose the format dynamically, per question, is the core innovation.
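The three formats can be sketched as output templates. The tag names here are an assumption for illustration; in AVR the model itself selects which format to emit, whereas this function merely renders a given choice:

```python
# Sketch of the three response formats as templates. Tag names are
# assumptions; the learned format-selection policy is not shown here.

def render(fmt: str, perception: str, reasoning: str, answer: str) -> str:
    if fmt == "full":        # complex questions: all three layers verbalized
        return (f"<perception>{perception}</perception>"
                f"<think>{reasoning}</think>"
                f"<answer>{answer}</answer>")
    if fmt == "perception":  # direct-observation questions: skip the reasoning
        return f"<perception>{perception}</perception><answer>{answer}</answer>"
    return f"<answer>{answer}</answer>"  # direct: answer only, nothing else
```

A direct-format response is strictly shorter than a perception-only one, which is shorter than the full format; this length ordering is exactly what an efficiency-aware training signal can exploit.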

Section 05

FS-GRPO Training: Balancing Efficiency and Correctness

AVR adopts the FS-GRPO training method:

  • Dual objectives: a correctness reward (to maintain answer accuracy) plus an efficiency reward (to encourage more concise formats);
  • Group Relative Policy Optimization: generate multiple candidate responses for each sample and update the policy based on the group's relative correctness and efficiency, guiding the model to select the simplest format that is still correct.
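The dual objective and the group-relative update can be sketched as follows. The 0.2 efficiency weight and the linear length penalty are assumptions for illustration; the paper's exact reward shaping may differ:

```python
# Hedged sketch of the FS-GRPO reward idea: correctness plus a brevity
# bonus, with advantages normalized within each sampled group (GRPO-style).
# The w_eff weight and linear length penalty are assumed, not the paper's.

def reward(correct: bool, n_tokens: int,
           max_tokens: int = 512, w_eff: float = 0.2) -> float:
    if not correct:
        return 0.0  # never reward brevity on a wrong answer
    brevity = 1.0 - min(n_tokens, max_tokens) / max_tokens  # shorter -> larger
    return 1.0 + w_eff * brevity

def group_advantages(rewards: list[float]) -> list[float]:
    # GRPO replaces a learned value baseline with the group's own statistics.
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mean) / (std or 1.0) for r in rewards]
```

Within a group, a correct-and-short candidate receives the highest advantage, pushing the policy toward the simplest format that still answers correctly.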

Section 06

Experimental Evaluation: Significant Efficiency Improvement While Maintaining Accuracy

AVR's performance in multiple benchmark tests:

  • Token usage reduced by 50-90%: the largest gains appear on perception-intensive tasks, with clear improvements on reasoning tasks as well;
  • Accuracy maintained: answer accuracy is not sacrificed, confirming that the pruned reasoning was redundant;
  • Format selection patterns: simple factual questions receive direct answers, spatial questions use the perception-only format, and mathematical/logical questions use the full format.

Section 07

Insights and Future Directions

Insights:

  1. Decoupling efficiency from capability: lengthy reasoning ≠ strong ability; metacognition (knowing when to go deep) matters more;
  2. Advantages of a layered architecture: separating perception, reasoning, and application aids modularity and interpretability;
  3. Generalization potential: the adaptive mechanism could extend to text reasoning, robot decision-making, and other domains.

Limitations and future work: the format division is coarse and needs finer granularity; the format-decision mechanism can be further optimized; future directions include continuously adjustable reasoning depth, combination with inference-acceleration techniques, and extension to video reasoning scenarios.

Section 08

Conclusion: The Value of Adaptive Reasoning

Through cognitive function decomposition and dynamic format selection, AVR significantly reduces token usage while maintaining accuracy, effectively alleviating overthinking. Its contribution is not just a technical method: it demonstrates a core principle of efficient intelligence, knowing when to stop rather than always reasoning to maximum depth. This adaptive, layered strategy may become a standard paradigm in visual AI and beyond.