Zing Forum

LLM Decision Reasoning Recognition: Decoding the Causes of Human Decisions from Verbal Reports

Studies show that large language models (LLMs) can accurately identify decision-making reasons in verbal reports, providing a new research path for understanding human decision-making processes and developing explainable AI.

Tags: Large Language Models · Decision Science · Verbal Report Analysis · Explainable AI · Natural Language Processing · Cognitive Psychology · Behavioral Research
Published 2026-03-31 23:45 · Recent activity 2026-03-31 23:58 · Estimated read 6 min

Section 01

[Introduction] LLM Decision Reasoning Recognition: A New Path to Decoding Human Decision-Making Reasons

This article explores how large language models (LLMs) can analyze human verbal reports and automatically identify the reasons behind decisions. Traditional decision-making research methods (such as post-hoc questionnaires and laboratory tasks) struggle to capture the complexity of decision processes, while LLMs, with their strong language understanding capabilities, offer an innovative direction for understanding human decision-making and for building explainable AI. The core finding is that LLMs identify decision-making reasons with accuracy comparable to that of human experts, while surpassing them in consistency and efficiency.


Section 02

Research Background: Methodological Challenges in Decision Science

Understanding decision-making processes is crucial for psychology, economics, neuroscience, and AI fields, but traditional methods have limitations:

  • Choice tasks: observe only outcomes and reveal nothing about the underlying process;
  • Verbal reports: rich data, but time-consuming and subjective to analyze;
  • Eye-tracking/neuroimaging: expensive, sometimes invasive, and the mapping between signals and psychological processes is unclear.

In addition, decision-making processes are largely implicit, and people often struggle to accurately describe the reasons for their own decisions.

Section 03

Research Innovation: LLM-Driven Design for Decision Reason Identification

The core innovation of the study is the use of LLMs to automatically analyze decision-making verbal reports. The design steps are as follows:

  1. Participants complete a choice task and give a verbal report;
  2. The report is transcribed into text;
  3. The LLM analyzes the text to identify decision-making reasons;
  4. The results are compared with annotations from human experts.

In parallel, the study established a classification system for decision-making reasons, with categories based on attributes, comparisons, emotions, rules, external factors, and more.
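The taxonomy and pipeline above can be sketched as a prompt-construction step. This is a minimal illustration, assuming the five category names mentioned in the article; the prompt wording, function name, and example report are hypothetical, not taken from the study:

```python
# Hypothetical reason taxonomy, following the categories named in the article.
REASON_CATEGORIES = [
    "attribute-based",   # e.g. "I picked it because it was cheaper"
    "comparison-based",  # e.g. "Option A beat Option B on quality"
    "emotion-based",     # e.g. "It just felt right"
    "rule-based",        # e.g. "I always choose the safe option"
    "external-factor",   # e.g. "My friend recommended it"
]

def build_prompt(report: str) -> str:
    """Assemble a zero-shot prompt asking an LLM to label the decision reason(s)."""
    categories = "\n".join(f"- {c}" for c in REASON_CATEGORIES)
    return (
        "Classify the decision reason(s) in the following verbal report.\n"
        f"Allowed categories:\n{categories}\n\n"
        f'Report: "{report}"\n'
        "Answer with one or more category names."
    )

prompt = build_prompt("I went with the cheaper laptop since the specs were similar.")
```

The resulting string would be sent to whatever LLM the pipeline uses; the fixed category list constrains the model's output so its labels can be compared directly against expert annotations.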

Section 04

Methods for LLM Analysis of Decision-Making Reasons

The study used multiple LLM analysis methods:

  • Zero-shot classification: Directly provide text and categories, relying on general knowledge and prompt engineering;
  • Few-shot learning: Help the LLM understand the task through examples;
  • Chain-of-thought prompting: Require the LLM to analyze step-by-step and explain the identification reasons;
  • Fine-tuning: Optimize the model with annotated data, requiring more resources but achieving better performance.
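The first three prompting strategies above differ only in how the prompt is assembled. The following sketch contrasts them in one small helper; the function, parameter names, and example texts are assumptions for illustration, not the study's actual prompts:

```python
def make_prompt(report, examples=None, chain_of_thought=False):
    """Build a classification prompt: zero-shot by default,
    few-shot if labelled examples are given, CoT if requested."""
    parts = ["Identify the decision reason category for the report below."]
    if examples:  # few-shot: prepend labelled examples to show the task
        for text, label in examples:
            parts.append(f"Report: {text}\nCategory: {label}")
    if chain_of_thought:  # CoT: ask for step-by-step reasoning first
        parts.append("Think step by step, then give the category.")
    parts.append(f"Report: {report}\nCategory:")
    return "\n\n".join(parts)

zero_shot = make_prompt("It just felt right.")
few_shot = make_prompt(
    "It just felt right.",
    examples=[("It was the cheapest option.", "attribute-based")],
)
cot = make_prompt("It just felt right.", chain_of_thought=True)
```

Fine-tuning, by contrast, changes the model weights rather than the prompt, which is why it needs annotated data and more compute but can outperform purely prompt-based methods.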

Section 05

Research Findings: Accuracy and Advantages of LLM Identification

Key findings: The performance of LLMs in identifying decision-making reasons is comparable to, or even exceeds, that of human experts.

  • Accuracy metrics: Include precision (fewer false positives), recall (fewer false negatives), and F1 score (comprehensive evaluation);
  • Comparison with humans: LLMs show high annotation consistency, greater efficiency, and strong objectivity, and can capture subtle linguistic cues (such as hints and euphemisms).
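The metrics above can be made concrete with a small worked example. The label sets here are illustrative, not the study's data; the function is a plain implementation of the standard definitions:

```python
def precision_recall_f1(predicted: set, gold: set):
    """Standard precision/recall/F1 over sets of predicted vs. gold labels."""
    tp = len(predicted & gold)  # reasons both the LLM and the expert flagged
    precision = tp / len(predicted) if predicted else 0.0  # fewer false positives
    recall = tp / len(gold) if gold else 0.0               # fewer false negatives
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)                  # harmonic mean of the two
    return precision, recall, f1

# Illustrative annotations: the LLM and the expert agree on one of two reasons.
expert = {"attribute-based", "emotion-based"}
llm = {"attribute-based", "rule-based"}
p, r, f1 = precision_recall_f1(llm, expert)
# p = 0.5, r = 0.5, f1 = 0.5
```

F1 is reported alongside precision and recall because either one alone can be gamed: predicting every category maximizes recall, predicting only the safest category maximizes precision.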

Section 06

Research Significance: Cross-Domain Impacts and Implications

The research has broad significance:

  • Decision science: Enable large-scale data collection, real-time process tracking, and cross-cultural comparisons;
  • AI development: Facilitate the construction of explainable AI, optimization of human-machine collaboration, and preference learning;
  • NLP applications: Promote progress in fields such as fine-grained sentiment analysis and dialogue understanding.

Section 07

Limitations and Future Research Directions

The study has limitations: it relies on the quality of verbal reports, struggles to infer causal mechanisms, still needs cross-cultural validation, and raises privacy and ethics concerns. Future directions:

  • Multimodal analysis (combining data such as voice and facial expressions);
  • Development of causal inference methods;
  • Personalized model training;
  • Construction of real-time decision intervention systems.