CAGE Framework: Using Attribution Graphs to Explain the Reasoning Process of Large Language Models

This article introduces the CAGE (Context Attribution via Graph Explanations) framework, a new method for explaining the reasoning process of large language models by constructing attribution graphs; compared to traditional methods, it improves fidelity by up to 40%.

Tags: LLM interpretability · Attribution methods · CAGE · LLM reasoning explanation · Attribution Graphs · AI safety
Published 2026-05-15 23:35 · Recent activity 2026-05-16 00:18 · Estimated read 5 min

Section 01

CAGE Framework: Using Attribution Graphs to Explain the Reasoning Process of Large Language Models

This article introduces the CAGE (Context Attribution via Graph Explanations) framework, a new method for explaining the reasoning process of large language models by constructing attribution graphs. CAGE improves fidelity by up to 40% over traditional methods, addressing a key flaw of existing context attribution: it ignores the mutual influence between generated tokens. This opens a new path for LLM interpretability research.

Section 02

Background: The Black Box Dilemma of Large Model Reasoning and Limitations of Existing Attribution Methods

Large language models (LLMs) are powerful, but their reasoning processes are opaque, raising safety and trust concerns. Attribution methods from computer vision inspired the direction of context attribution for LLMs, but existing methods link generated tokens directly to the prompt and ignore the mutual influence between generated tokens, yielding incomplete explanations.

Section 03

Core of the CAGE Framework: Innovative Design of Attribution Graphs

The CAGE framework introduces an attribution graph, a directed graph structure that quantifies the influence of the prompt and previously generated content on each generation step. The graph must satisfy two key properties: causality (edges only point from earlier context to later tokens, reflecting true causal order) and row stochasticity (for each generation step, the attribution weights over its context sum to 1). Accurate attribution values are then computed by marginalizing out the contributions of intermediate generated tokens.
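
To make these properties concrete, here is a minimal sketch, not the official CAGE code, of how attribution mass routed through intermediate generated tokens can be marginalized back onto the prompt. The matrix layout, function name, and toy numbers are illustrative assumptions.

```python
# Minimal sketch (assumed layout, not the repository's implementation).
# Context positions: prompt tokens p_0..p_{m-1} in columns 0..m-1,
# generated tokens g_0..g_{n-1} in columns m..m+n-1. A[t, j] is the
# direct attribution of generated token g_t to context position j;
# causality zeroes out columns at or after m + t, and row stochasticity
# makes each row of A sum to 1.
import numpy as np

def marginalize_to_prompt(A: np.ndarray, m: int) -> np.ndarray:
    """Total (direct + indirect) attribution of each g_t to the prompt."""
    n = A.shape[0]
    total = np.zeros((n, m))
    for t in range(n):
        total[t] = A[t, :m]                    # direct prompt influence
        for s in range(t):                     # routes through earlier g_s
            total[t] += A[t, m + s] * total[s]
    return total

# Toy example: 2 prompt tokens, 2 generated tokens.
A = np.array([
    [0.7, 0.3, 0.0, 0.0],   # g_0 attends only to the prompt
    [0.2, 0.3, 0.5, 0.0],   # g_1 routes half its mass through g_0
])
print(marginalize_to_prompt(A, m=2))  # [[0.7, 0.3], [0.55, 0.45]]
```

Because mass is only redistributed along paths of the DAG, each marginalized row still sums to 1, so row stochasticity is preserved end to end.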

Section 04

Technical Implementation: Modules and Usage Flow of the CAGE Framework

The open-source implementation includes llm_attr.py (a collection of attribution methods), the CAGE attribution calculation itself (graph-structured attribution plus DAG visualization), and evaluation datasets. The usage flow has five steps: prepare the model, initialize the evaluator, load the data, generate attributions, and analyze the results; a rough sketch of this flow appears below, and worked examples are in example.ipynb.
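
The article names only llm_attr.py and example.ipynb, so the snippet below is a runnable approximation of the five-step flow rather than the repository's actual API. It substitutes a crude attention-based signal for the attribution step; CAGE's graph attribution lives in llm_attr.py, and example.ipynb shows the real interface.

```python
# Workflow sketch with a stand-in attribution signal (NOT CAGE's method).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# 1. Model preparation (any causal LM works for the sketch).
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

# 2./3. "Evaluator" and data, reduced here to a single prompt.
prompt = "The capital of France is"
inputs = tok(prompt, return_tensors="pt")

# 4. Generate a crude attribution signal: last-layer attention from the
#    final position to each context token, averaged over heads.
with torch.no_grad():
    out = model(**inputs, output_attentions=True)
attn = out.attentions[-1][0].mean(dim=0)   # (seq, seq)
scores = attn[-1]                          # influence on the next token

# 5. Result analysis: rank context tokens by the signal.
tokens = tok.convert_ids_to_tokens(inputs["input_ids"][0])
for t, s in sorted(zip(tokens, scores.tolist()), key=lambda x: -x[1]):
    print(f"{t:>12s}  {s:.3f}")
```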

Section 05

Experimental Evidence: Significant Fidelity Improvement of the CAGE Framework

Evaluated across multiple models, datasets, and metrics, CAGE improves average fidelity by up to 40% compared to existing methods. This is attributed to the attribution graph's complete modeling of dependencies between tokens, which traditional methods ignore.
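
The article does not say which fidelity metrics were used, so the following is a generic deletion-style fidelity check, a common proxy in attribution work and an assumption here: mask the highest-attributed context tokens and measure how much the model's confidence in its original prediction drops. The more faithful the attribution, the larger the drop.

```python
# Generic deletion-based fidelity probe (assumed metric, not the paper's).
import torch

def deletion_fidelity(model, tok, prompt: str, scores, k: int = 3) -> float:
    """Drop in next-token log-prob after masking the top-k attributed
    tokens; `scores` holds one attribution value per prompt token."""
    ids = tok(prompt, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        logp = torch.log_softmax(model(ids).logits[0, -1], dim=-1)
    target = logp.argmax()                        # model's original choice
    top = torch.as_tensor(scores).topk(k).indices
    mask_id = tok.unk_token_id if tok.unk_token_id is not None else tok.eos_token_id
    masked = ids.clone()
    masked[0, top] = mask_id                      # delete top-k evidence
    with torch.no_grad():
        logp_m = torch.log_softmax(model(masked).logits[0, -1], dim=-1)
    return (logp[target] - logp_m[target]).item()  # larger = more faithful
```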

Section 06

Practical Value of CAGE: Model Debugging and Applications in High-Risk Scenarios

For developers, CAGE can be used to spot inputs the model over-relies on or ignores entirely, detect biases, and verify reasoning logic. In high-risk fields such as healthcare, law, and finance, it strengthens the credibility of explanations, supporting compliance and human-machine collaboration. It also points to future directions such as multimodal extension, combination with chain-of-thought, and interactive visualization. A toy version of the debugging use case follows below.
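
As a toy illustration of the debugging use case, the helper below averages marginalized prompt attributions across generation steps and flags tokens the model barely used or leaned on disproportionately. The thresholds and the example data are assumptions for illustration, not part of CAGE.

```python
# Toy audit of prompt usage from row-stochastic attributions.
import numpy as np

def audit_prompt_usage(prompt_attr: np.ndarray, tokens: list,
                       ignore_thresh: float = 0.02,
                       dominate_thresh: float = 0.5):
    """prompt_attr: (n_generated, n_prompt) marginalized attributions."""
    usage = prompt_attr.mean(axis=0)          # mean influence per token
    ignored = [t for t, u in zip(tokens, usage) if u < ignore_thresh]
    dominant = [t for t, u in zip(tokens, usage) if u > dominate_thresh]
    return ignored, dominant

# Example: the generation hinges almost entirely on one token.
tokens = ["Patient", "age", "62", "reports", "chest", "pain"]
attr = np.array([[0.02, 0.01, 0.01, 0.02, 0.04, 0.90]])
ignored, dominant = audit_prompt_usage(attr, tokens)
print("ignored:", ignored)    # ['age', '62'] -- barely used
print("dominant:", dominant)  # ['pain'] -- candidate over-reliance
```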

Section 07

Summary and Outlook: Contributions and Future Development of the CAGE Framework

By modeling the influence between generated tokens via attribution graphs, CAGE opens a new path for LLM interpretability. The 40% fidelity improvement is a starting point rather than an endpoint: future work must deepen our understanding of large-model reasoning, and CAGE supplies the theoretical foundation and tooling for it. Readers can visit the project repository or the arXiv preprint (arXiv:2512.15663) for details.