Zing Forum


Where Reasoning Fails: Step-Saliency Reveals Hidden Breaks in Large Models' Chain of Thought

This article introduces the Step-Saliency method, which analyzes the attention flow in the chain of thought of Large Reasoning Models (LRMs) to identify two key failure modes: shallow locking and deep decay. It also proposes the StepFlow intervention scheme to improve reasoning accuracy without retraining.

Tags: Large Language Models, Reasoning Models, Chain of Thought, Attention Mechanism, Interpretability, Step-Saliency, StepFlow, Information Flow Analysis
Published 2026-04-08 13:21 · Recent activity 2026-04-09 10:45 · Estimated read: 4 min

Section 01

[Introduction] Step-Saliency Reveals Hidden Breaks in Large Models' Chain of Thought, and How to Repair Them

This article proposes the Step-Saliency method, which analyzes the attention flow in the chain of thought of Large Reasoning Models (LRMs) to identify two key failure modes, shallow locking and deep decay, and designs the StepFlow intervention scheme, which improves reasoning accuracy without retraining.


Section 02

Background: The Black Box Dilemma of Large Models' Chain of Thought

Large Reasoning Models (LRMs) demonstrate strong capabilities on multi-step reasoning tasks, but their chain-of-thought process is unstable and difficult to interpret. Existing analysis tools struggle with long, structured reasoning trajectories, leaving the models' internal information-flow mechanisms opaque.


Section 03

Method: Step-Saliency — Illuminating the Attention Map of the Chain of Thought

Step-Saliency is a technique that combines attention scores with gradient information. It aggregates attention at the step level, generates inter-step saliency maps, tracks the complete information flow from question to reasoning to conclusion, and quantifies each step's influence on subsequent steps.
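The step-level aggregation described above can be sketched in NumPy. This is a minimal illustration, not the paper's exact formulation: the function name `step_saliency`, the elementwise attention-gradient product, and the mean-pooling over token spans are all assumptions.

```python
import numpy as np

def step_saliency(attn, grad, step_spans):
    """Aggregate token-level attention-gradient saliency to the step level.

    attn, grad : (seq_len, seq_len) arrays -- attention weights for one
        head/layer and the gradient of the loss w.r.t. those weights.
    step_spans : list of (start, end) token ranges, one per reasoning
        step, in order: question, thought steps, conclusion.

    Returns an (n_steps, n_steps) map whose entry (i, j) estimates how
    much step j draws on step i.
    """
    token_sal = np.abs(attn * grad)  # elementwise saliency per token pair
    n = len(step_spans)
    sal = np.zeros((n, n))
    for j, (js, je) in enumerate(step_spans):       # destination step
        for i, (is_, ie) in enumerate(step_spans):  # source step
            if i <= j:  # causal: a step attends only to itself/earlier steps
                sal[i, j] = token_sal[js:je, is_:ie].mean()
    return sal
```

In practice one would extract `attn` and `grad` per layer and head from the model, then inspect how `sal[i, j]` changes with depth.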


Section 04

Key Findings: Two Modes of Chain of Thought Information Flow Breakage

  1. Shallow Locking: the model's shallow layers over-focus on the current step, ignoring earlier context and handling subproblems in isolation.
  2. Deep Decay: in the later stages of reasoning, the saliency of early steps gradually decays in the model's deep layers, so key earlier deductions are forgotten.
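Deep decay in particular can be probed with a simple diagnostic over step-level saliency maps. The helper below and its ratio metric are hypothetical illustrations of the idea, not the authors' measurement.

```python
import numpy as np

def early_step_decay(step_sal_by_layer, early=0):
    """Measure how an early step's saliency fades for later steps.

    step_sal_by_layer : list of (n_steps, n_steps) step-saliency maps,
        one per layer.
    Returns, per layer, the ratio of the early step's saliency at the
    final step to its saliency at the immediately following step --
    values well below 1 in deep layers suggest "deep decay".
    """
    ratios = []
    for sal in step_sal_by_layer:
        first = sal[early, early + 1]  # influence on the next step
        last = sal[early, -1]          # influence on the final step
        ratios.append(last / first if first > 0 else 0.0)
    return ratios
```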

Section 05

Repair Scheme: StepFlow — An Intervention Method Without Retraining

StepFlow consists of two components:
  1. Odds-Equal Bridge: adjusts the shallow attention distribution to balance the use of historical context.
  2. Step Momentum Injection: introduces step-level residual connections in deep layers to maintain memory of early steps.
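Under those descriptions, both interventions can be sketched as small inference-time adjustments. The function names mirror the paper's component names, but the concrete formulas (the `alpha`-weighted boost, the mask, the `beta` blend) are assumptions for illustration only.

```python
import numpy as np

def odds_equal_bridge(attn, history_mask, alpha=0.5):
    """Shallow-layer fix (hypothetical form): shift extra attention mass
    toward historical-context tokens, then renormalize so each query's
    attention row still sums to 1."""
    boost = alpha * history_mask * attn.mean(axis=-1, keepdims=True)
    boosted = attn + boost
    return boosted / boosted.sum(axis=-1, keepdims=True)

def step_momentum_injection(hidden, step_summaries, beta=0.1):
    """Deep-layer fix (hypothetical form): add a step-level residual,
    blending each token's hidden state with a running summary of
    earlier reasoning steps."""
    return hidden + beta * step_summaries
```

Because both operate on attention maps and hidden states at inference time, neither requires retraining, which matches the article's description of StepFlow.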


Section 06

Experimental Results: Performance Improvement Across Models and Tasks

StepFlow was validated across mathematics, science, and programming tasks and across multiple LRM architectures: without any retraining, it delivers stable behavior across models and improved accuracy on multiple tasks.


Section 07

Significance: Reconsidering Information Flow in Large Model Reasoning

This study reveals systematic structural defects in LRM reasoning. Step-Saliency provides a new analysis tool, StepFlow demonstrates a lightweight path to performance gains, and the results underscore how much information-flow efficiency matters for model capability.


Section 08

Future Outlook: Exploring More Reliable AI Reasoning Systems

Future work can explore further information-flow failure modes, extend StepFlow to more models and tasks, design new architectures that avoid such breaks by construction, and push toward more reliable and interpretable AI reasoning systems.