Zing Forum

Reading

WISE: Enabling Multimodal Models to 'Learn Thick First, Then Thin'—Achieves SOTA Even With 5x Compression in Reasoning Length

WISE uses a training structure of 'Concise Reason → Answer → Detailed Explanation' and a self-distillation objective to enable models to compress detailed reasoning into a compact form. On ReasonSeg, it achieves 58.3 cIoU while reducing reasoning tokens from 112 to 23.

Tags: multimodal CoT reasoning, WISE, thought compression, language segmentation, ReasonSeg, self-distillation, efficient inference, large-model optimization
Published 2026-04-02 21:45 · Recent activity 2026-04-03 09:20 · Estimated read: 5 min

Section 01

[Introduction] WISE: Enabling Multimodal Models to 'Learn Thick First, Then Thin'—Achieves SOTA Even With 5x Reasoning Compression

WISE uses a training structure of 'Concise Reason → Answer → Detailed Explanation' and a self-distillation objective to guide models to compress detailed reasoning into a compact form. On the ReasonSeg benchmark, WISE-S achieves a SOTA result of 58.3 cIoU while reducing the number of reasoning tokens from 112 to 23 (a compression ratio of nearly 5x), improving quality and efficiency at the same time.


Section 02

Background: Cost Bottleneck of Chain-of-Thought Reasoning

Chain-of-thought (CoT) reasoning has improved the multimodal capabilities of large models, but lengthy reasoning traces incur substantial compute and latency costs, which become a deployment bottleneck in real-time or high-concurrency scenarios. The ideal is a model that retains deep reasoning ability while expressing its reasoning concisely and efficiently.


Section 03

Core of WISE: Three-Stage Training Structure of 'Learn Thick, Use Thin'

The core concept of WISE is 'learn thick, use thin' (thinking twice: once for learning, once for speed). Training uses the three-stage sequence 'Concise Reason → Answer → Detailed Explanation' and leverages the autoregressive factorization: because the concise reason is generated first, it is forced to carry enough information to support the answer and the detailed explanation that follow it.
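The three-stage ordering can be sketched as a simple sequence formatter. The delimiter tags below are hypothetical placeholders for illustration; the paper's exact template may differ.

```python
def format_wise_sequence(concise_reason: str, answer: str,
                         detailed_explanation: str) -> str:
    """Build a WISE-style training sequence.

    Order matters: the concise reason comes first, so autoregressive
    training forces it to contain enough information to support the
    answer and detailed explanation generated after it.
    """
    return (
        f"<reason>{concise_reason}</reason>"
        f"<answer>{answer}</answer>"
        f"<explain>{detailed_explanation}</explain>"
    )
```

At inference time, generation can simply be stopped after the answer stage, so the detailed explanation is paid for only during training.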


Section 04

Method Details: Self-Distillation and an Inference-Time Conciseness Strategy

WISE introduces a self-distillation training objective that rewards both semantic fidelity (semantic equivalence between the concise reason and the detailed explanation) and conciseness (expressing the complete reasoning in fewer tokens). At inference time, the WISE-S strategy injects conciseness prompts to omit the detailed explanation, mitigating the distribution shift between training and inference.
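A minimal sketch of an objective that trades off the two terms described above. The cosine-similarity fidelity term, the token-count conciseness term, and the `alpha` weight are illustrative assumptions, not the paper's actual formulation.

```python
import math

def self_distill_reward(concise_emb, detailed_emb,
                        n_concise_tokens, alpha=0.5):
    """Reward = alpha * semantic fidelity + (1 - alpha) * conciseness.

    Fidelity is approximated here by cosine similarity between pooled
    embeddings of the concise reason and the detailed explanation;
    conciseness decays with the number of concise-reason tokens.
    """
    dot = sum(a * b for a, b in zip(concise_emb, detailed_emb))
    norm = (math.sqrt(sum(a * a for a in concise_emb))
            * math.sqrt(sum(b * b for b in detailed_emb)))
    fidelity = dot / norm                      # cosine similarity in [-1, 1]
    conciseness = 1.0 / (1.0 + n_concise_tokens)  # fewer tokens -> higher
    return alpha * fidelity + (1 - alpha) * conciseness
```

With fidelity held fixed, a 23-token concise reason scores strictly higher than a 112-token one, which is the direction of pressure the objective is meant to apply.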


Section 05

Experimental Evidence: Dual Breakthroughs in Quality and Efficiency

Under the ReasonSeg zero-shot setting, WISE-S achieves a SOTA accuracy of 58.3 cIoU while reasoning tokens drop from 112 to 23, nearly 5x compression. The results show that compressed reasoning need not sacrifice accuracy, challenging the assumption that 'more detailed reasoning is necessarily better'.
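The headline compression figure follows directly from the reported token counts:

```python
def compression_ratio(original_tokens: int, compressed_tokens: int) -> float:
    """Reasoning-length compression ratio (higher = more compressed)."""
    return original_tokens / compressed_tokens

# 112 reasoning tokens compressed to 23 gives roughly 4.87x,
# i.e. the "nearly 5x" reported for WISE-S.
ratio = compression_ratio(112, 23)
```

Since decoding cost scales with the number of generated tokens, a ~4.87x shorter trace translates almost directly into fewer decoding steps per query.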


Section 06

Technical Implementation: Key Points of WISE's Training Process

The WISE training process includes: data preparation (reusing existing CoT training data, with no additional annotation), sequence formatting (the three-stage structure), loss function design (balancing the language modeling loss against the distillation loss), and inference optimization (shorter sequences mean fewer decoding steps and lower latency).
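The loss-balancing point above can be sketched as a weighted sum of the two terms. The weight `lam` is an assumed hyperparameter for illustration; the paper's actual weighting scheme is not specified here.

```python
def wise_training_loss(lm_loss: float, distill_loss: float,
                       lam: float = 0.1) -> float:
    """Total loss = language-modeling loss + lam * self-distillation loss.

    lm_loss:      standard next-token cross-entropy over the full
                  three-stage sequence.
    distill_loss: penalty encouraging the concise reason to remain
                  semantically faithful to the detailed explanation.
    lam:          assumed balancing hyperparameter (illustrative).
    """
    return lm_loss + lam * distill_loss
```

Setting `lam = 0` recovers plain CoT fine-tuning, which makes the distillation term easy to ablate.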


Section 07

Implications and Outlook: Application Potential and Future Directions of WISE

WISE provides a new paradigm for efficient multimodal reasoning and can be applied to scenarios such as visual question answering, document understanding, and interactive applications. Limitations include task specificity and interpretability trade-offs; future work includes verifying generalization ability, combining WISE with model distillation, and exploring mechanisms for dynamically adjusting reasoning length.