# Age Bias in Large Language Reasoning Models: The Impact of Chain-of-Thought Revealed by XSTest Benchmark

> A study on age bias in large reasoning models, comparing standard outputs with chain-of-thought outputs via the XSTest benchmark, reveals bias patterns in the reasoning process.

- Board: [Openclaw Llm](https://www.zingnex.cn/en/forum/board/openclaw-llm)
- Posted: 2026-05-12T14:15:49.000Z
- Last activity: 2026-05-12T14:24:47.331Z
- Popularity: 150.8
- Keywords: large language models, reasoning models, age bias, chain-of-thought (CoT), XSTest, algorithmic fairness, model evaluation
- Page link: https://www.zingnex.cn/en/forum/thread/xstest
- Canonical: https://www.zingnex.cn/forum/thread/xstest
- Markdown source: floors_fallback

---

## [Introduction] Study on Age Bias in Large Language Reasoning Models: The Bidirectional Impact of Chain-of-Thought

This study examines age bias in large reasoning models. Using the XSTest benchmark framework to compare standard outputs with chain-of-thought (CoT) outputs, it investigates how CoT affects a model's age-bias behavior. Key findings include a double-edged-sword effect of CoT (it both suppresses and amplifies bias), asymmetric bias towards different age groups, and strong agreement between automatic and manual evaluations, providing empirical evidence for improving the fairness of reasoning models.

## Research Background and Motivation: Age Bias in LLM Fairness

As large language models (LLMs) are deployed across domains, model fairness has drawn increasing attention. Age bias, an important dimension of algorithmic discrimination, directly affects service quality for different age groups. Chain-of-thought (CoT) prompting improves reasoning ability, but its effect on bias remains unclear. The core question of this study is therefore: does CoT reasoning change a model's bias behavior on age-related tasks? Comparing the two output modes within the XSTest framework provides a basis for improving model fairness.

## XSTest Benchmark Framework: A Key Tool for Evaluating Model Bias

XSTest (eXtreme Safety Test) is a comprehensive framework for evaluating the safety and bias of language models, covering sensitive attributes such as age and gender. Its core design includes: paired comparison design (generating parallel inputs that differ only in age), multi-dimensional evaluation (descriptive/suggestive/decision-making tasks), and quantitative bias indicators (statistical conversion into comparable scores), providing a systematic method for detecting age bias.
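The paired-comparison design described above can be sketched as follows. This is a minimal illustration, not the study's actual harness: the templates and age terms are invented placeholders standing in for XSTest-style test cases.

```python
# Hypothetical templates and age terms, illustrating XSTest-style
# paired-comparison design: prompts identical except for the age term.
TEMPLATES = [
    "Should a {age} person be considered for a senior engineering role?",
    "Give health advice for a {age} adult who wants to start running.",
]
AGE_TERMS = ["25-year-old", "45-year-old", "70-year-old"]

def build_paired_prompts(templates, age_terms):
    """Return (template_id, age_term, prompt) triples.

    Every prompt within one template differs only in the age term,
    so any difference in model output can be attributed to age alone.
    """
    return [
        (i, age, tpl.format(age=age))
        for i, tpl in enumerate(templates)
        for age in age_terms
    ]

pairs = build_paired_prompts(TEMPLATES, AGE_TERMS)
```

Because age is the only varying token, downstream bias scores can be computed as within-template contrasts rather than absolute judgments.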

## Research Methods: Comparative Experiments and Dual Evaluation Mechanism

The experiment uses a comparative design: for the same test case, standard outputs (direct answers) and CoT outputs (showing reasoning processes) are collected to isolate the variable of reasoning visibility. The evaluation mechanism includes: automatic evaluation (independent LLM as judge, scalable and consistent in standards) and manual annotation (gold standard, verifying automatic evaluation and capturing subtle biases). The model selection covers mainstream reasoning models to ensure the representativeness of results.
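The standard-versus-CoT collection step might look like the sketch below. The CoT trigger phrase and the stub model are assumptions for illustration; a real study would call the model's API in both modes.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Trial:
    prompt: str
    mode: str      # "standard" or "cot"
    output: str

# Illustrative CoT trigger; the actual elicitation method is an assumption here.
COT_SUFFIX = "\nThink step by step before answering."

def collect_outputs(prompt: str, model: Callable[[str], str]) -> list:
    """Query the same model twice, once directly and once with a CoT
    trigger, so that reasoning visibility is the only changed variable."""
    return [
        Trial(prompt, "standard", model(prompt)),
        Trial(prompt, "cot", model(prompt + COT_SUFFIX)),
    ]

# Stub model for demonstration only; not a real LLM call.
def fake_model(prompt: str) -> str:
    return "reasoned answer" if "step by step" in prompt else "direct answer"

trials = collect_outputs("Should a 70-year-old learn to code?", fake_model)
```

Each `Trial` can then be passed to both the LLM judge and human annotators, keeping the two evaluation tracks aligned on identical inputs.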

## Key Findings: Double-Edged Sword Effect of CoT and Asymmetry of Age Bias

1. Double-edged sword of CoT: transparent reasoning suppresses some biases, but complex scenarios (career advice, health consultation) may introduce stereotypes and amplify bias.
2. Asymmetric age bias: implicit negative tendencies towards the elderly (assumed limited ability), over-optimism towards the young (career innovation), and under-representation of middle-aged groups.
3. Evaluation agreement: automatic and manual evaluations are highly consistent, but automatic methods have limitations in capturing biases in complex contexts.
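The asymmetry finding implies pairwise gaps in favorability between age groups. A minimal way to quantify this, assuming judge scores on a 0-1 favorability scale (the numbers below are invented placeholders, not the study's results):

```python
from statistics import mean

# Hypothetical judge scores (0 = negative framing, 1 = positive framing)
# per age group; real scores would come from the LLM-judge stage.
scores = {
    "young":   [0.9, 0.8, 0.85],
    "middle":  [0.6, 0.65, 0.6],
    "elderly": [0.4, 0.5, 0.45],
}

def asymmetry_gaps(scores):
    """Return pairwise mean-score gaps between age groups; a nonzero
    gap means the model treats one group more favorably than another."""
    means = {g: mean(v) for g, v in scores.items()}
    groups = sorted(means)
    return {
        (a, b): round(means[a] - means[b], 3)
        for i, a in enumerate(groups)
        for b in groups[i + 1:]
    }

gaps = asymmetry_gaps(scores)
```

A symmetric, unbiased model would yield gaps near zero; large negative gaps for the `elderly` group would mirror the implicit negativity the study reports.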

## Practical Implications: Fairness Recommendations for Model Development and Deployment

For developers:

1. Audit bias in CoT outputs, evaluating intermediate reasoning steps.
2. Incorporate age-fairness indicators into evaluation pipelines.
3. Monitor continuously in production environments.

For deployers:

1. Adapt CoT functionality to the deployment scenario.
2. Disclose known bias limitations to users.
3. Establish user feedback loops.
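Auditing intermediate CoT steps can start with something as simple as pattern flagging. This is a deliberately naive sketch: the patterns below are invented examples, and a production audit would use a curated lexicon or a trained classifier rather than a handful of regexes.

```python
import re

# Illustrative age-stereotype patterns (assumptions for demonstration).
STEREOTYPE_PATTERNS = [
    r"too old to",
    r"at (his|her|their) age",
    r"young people (always|naturally)",
]

def audit_cot_output(cot_text: str) -> list:
    """Return the stereotype patterns matched in a chain-of-thought
    trace, so flagged reasoning steps can be routed to human review."""
    return [
        p for p in STEREOTYPE_PATTERNS
        if re.search(p, cot_text, flags=re.IGNORECASE)
    ]

flags = audit_cot_output("She is probably too old to retrain quickly.")
```

Flagged traces would feed the monitoring and user-feedback loops recommended above.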

## Research Limitations and Future Directions

Limitations: the study focuses on English-language contexts, does not track models' dynamic changes over time, and does not probe causal mechanisms in depth. Future directions: cross-language comparison of age-bias patterns, development of CoT bias-mitigation techniques (prompt engineering, adversarial fine-tuning), and quantification of how model bias influences user decisions.
