Uncertainty Quantification for Large Reasoning Models: A New Approach Combining Conformal Prediction and Shapley Values

This paper proposes a new method that combines conformal prediction and the Shapley value framework to provide statistically guaranteed uncertainty quantification for large reasoning models and explain the sources of uncertainty.

Tags: Large Reasoning Models, Uncertainty Quantification, Conformal Prediction, Shapley Values, Explainable AI, Statistical Guarantees
Published 2026-04-15 09:53 · Recent activity 2026-04-16 10:54 · Estimated read: 4 min

Section 01

[Introduction] New Method for Uncertainty Quantification of Large Reasoning Models: Conformal Prediction + Shapley Values

This paper addresses the dilemma of uncertainty quantification for large reasoning models and proposes a new method combining conformal prediction and Shapley values. It not only provides statistically guaranteed uncertainty quantification for models but also explains the sources of uncertainty, which is of great significance for the safe deployment of AI systems.


Section 02

Problem Background and Limitations of Existing Methods

Problem Background

Large Reasoning Models (LRMs) have made significant progress in complex reasoning tasks, but quantifying the uncertainty of their generated content remains a key challenge. Traditional methods cannot provide finite-sample guarantees, making it difficult to trust model outputs in practical applications.

Limitations of Existing Methods

Conformal Prediction (CP) can construct statistically rigorous uncertainty sets, but it has two major issues:

  1. It ignores the logical connection between reasoning trajectories and answers;
  2. It cannot explain the sources of uncertainty.

In addition, distinguishing between reasoning quality and answer correctness, and establishing theoretical guarantees for efficient explanation methods, are also highly challenging.
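For context, the statistical core of CP can be sketched in a few lines of split conformal prediction. Everything below (the scores, `alpha`, the candidate answers) is illustrative and not the paper's setup:

```python
import numpy as np

def split_conformal_set(cal_scores, test_scores, alpha=0.1):
    """Split conformal prediction: given nonconformity scores for a
    held-out calibration set and scores for candidate outputs of a
    test input, keep every candidate whose score is at or below the
    calibrated threshold. Under exchangeability this yields marginal
    coverage of at least 1 - alpha."""
    n = len(cal_scores)
    # Finite-sample-corrected quantile level: ceil((n + 1)(1 - alpha)) / n
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(cal_scores, q_level, method="higher")
    return [i for i, s in enumerate(test_scores) if s <= q_hat]

# Toy example: 100 synthetic calibration scores, 4 candidate answers
rng = np.random.default_rng(0)
cal = rng.uniform(0, 1, size=100)
candidates = [0.05, 0.5, 0.95, 1.2]  # hypothetical nonconformity scores
kept = split_conformal_set(cal, candidates, alpha=0.1)
```

Note that the set construction says nothing about *why* a candidate was kept or dropped, which is exactly the explainability gap the paper targets.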


Section 03

Core Contribution 1: Uncertainty Quantification of Reasoning-Answer Structure

The first stage of the new method performs uncertainty quantification over the reasoning-answer structure and provides statistical guarantees: rather than scoring only the final answer, it also accounts for the reliability of the entire reasoning process.
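One way to picture a reasoning-aware nonconformity score is to combine the model's confidence in the trajectory with its confidence in the answer. The weighting scheme and log-probability inputs below are assumptions for illustration, not the paper's actual definition:

```python
def nonconformity(reasoning_logprob, answer_logprob, lam=0.5):
    """Hypothetical nonconformity score penalizing both an unlikely
    reasoning trajectory and an unlikely final answer. `lam` trades
    off the two components; lower log-likelihoods (less probable
    generations) produce higher nonconformity."""
    return -(lam * reasoning_logprob + (1 - lam) * answer_logprob)

# A confident answer reached via an implausible trajectory still
# scores as nonconforming, unlike answer-only scores.
score = nonconformity(reasoning_logprob=-1.0, answer_logprob=-2.0)
```

Calibrating such a score with conformal prediction then yields coverage guarantees over reasoning-answer pairs rather than bare answers.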


Section 04

Core Contribution 2: Unified Explanation Framework Based on Shapley Values

The second stage develops a unified explanation framework from examples to steps, using Shapley values to identify:

  1. Key subsets of training examples (which training data are crucial for current reasoning);
  2. Key reasoning steps (which steps in the reasoning process are indispensable for ensuring coverage).
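Both attributions rest on the classical Shapley value, which for a small player set (e.g. a handful of reasoning steps) can be computed exactly. The additive toy utility below is a stand-in; in the paper's setting the utility would be something like the coverage attained by a coalition of steps or examples:

```python
import itertools
import math

def exact_shapley(players, value_fn):
    """Exact Shapley values: average each player's marginal
    contribution over all subsets of the other players, weighted by
    |S|! (n - |S| - 1)! / n!."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for r in range(n):
            for subset in itertools.combinations(others, r):
                s = frozenset(subset)
                weight = (math.factorial(r) * math.factorial(n - r - 1)
                          / math.factorial(n))
                phi[p] += weight * (value_fn(s | {p}) - value_fn(s))
    return phi

# Toy additive game: step "s2" contributes twice as much as the others
contrib = {"s1": 1.0, "s2": 2.0, "s3": 1.0}
value = lambda coalition: sum(contrib[p] for p in coalition)
phi = exact_shapley(list(contrib), value)
```

In an additive game each player's Shapley value equals its own contribution, and efficiency guarantees the values sum to the grand coalition's utility; for many steps or examples, exact enumeration is exponential and Monte Carlo approximations are used instead.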

Section 05

Theoretical Analysis and Experimental Validation

The study conducts a detailed theoretical analysis of the proposed method and carries out extensive experiments on multiple challenging reasoning datasets. The results show that this method can effectively quantify uncertainty while providing interpretability.
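A standard way to validate such a method empirically is to check that the realized coverage on held-out data matches the nominal level. The Gaussian scores below are synthetic stand-ins for real nonconformity scores, not data from the paper:

```python
import numpy as np

def empirical_coverage(cal_scores, test_scores, alpha=0.1):
    """Calibrate a split-conformal threshold on `cal_scores`, then
    report the fraction of `test_scores` it covers; for exchangeable
    data this should concentrate near (or just above) 1 - alpha."""
    n = len(cal_scores)
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(cal_scores, q_level, method="higher")
    return float(np.mean(np.asarray(test_scores) <= q_hat))

rng = np.random.default_rng(1)
cal = rng.normal(size=2000)
test = rng.normal(size=2000)  # drawn exchangeably with calibration
cov = empirical_coverage(cal, test, alpha=0.1)
```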


Section 06

Practical Significance and Application Prospects

Practical Significance

  • Reliability assessment: Helps users determine when to trust model reasoning results;
  • Error diagnosis: Assists developers in locating the root cause of problems by identifying key training examples and steps;
  • Model improvement: Provides guidance for targeted optimization.

Application Prospects

Going forward, the method is expected to extend to more types of reasoning tasks, laying the groundwork for more trustworthy AI systems.