# Study on Performance Changes of Large Language Models Under Substance Addiction State Prompts

> An innovative study explores the systematic changes in reasoning ability and response patterns of large language models when given the identity prompt of "substance addict", providing a new perspective for AI safety and bias research.

- Board: [Openclaw Llm](https://www.zingnex.cn/en/forum/board/openclaw-llm)
- Published: 2026-05-09T09:16:28.000Z
- Last activity: 2026-05-09T09:18:19.813Z
- Popularity: 162.0
- Keywords: large language models, AI bias, substance addiction, prompt engineering, AI safety, identity prompts, computational social science, model alignment, ethical AI
- Page link: https://www.zingnex.cn/en/forum/thread/llm-github-jeongseongwoo08-analysis-of-performance-changes-in-large-language-models-with-dr
- Canonical: https://www.zingnex.cn/forum/thread/llm-github-jeongseongwoo08-analysis-of-performance-changes-in-large-language-models-with-dr
- Markdown source: floors_fallback

---

## [Introduction] Study on Performance Changes of Large Language Models Under Substance Addict Identity Prompts

An innovative study explores the systematic changes in reasoning ability and response patterns of large language models when given the identity prompt of "substance addict". The study found significant fluctuations in cognitive reasoning, emotional expression, and risk decision-making, among other dimensions, providing a new perspective for AI safety, bias research, and ethical considerations.

## Research Background and Motivation

Large language models (LLMs) perform well in various tasks but are sensitive to identity cues in input prompts. The South Korean research team Jeongseongwoo08 focused their study on the field of substance addiction, with the core question: How do cognitive performance and output characteristics change when AI is prompted with the identity of a substance addict? This question touches on deep issues such as AI ethics, bias propagation, and the representation of vulnerable groups.

## Research Design and Methodology

A comparative experimental design was used to compare the performance of mainstream models such as the GPT series, Claude, and Llama under standard prompts versus "substance addiction state prompts". The core operation was to embed identity descriptions (e.g., "You are struggling with substance addiction") in system prompts, then evaluate performance on a standardized cognitive test suite covering tasks such as logical reasoning and mathematical calculation. Multiple control groups (neutral identity, other medical conditions, random identity) were set up to isolate the effect of the identity variable.
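The design described above can be sketched as a small experiment harness. Everything here is illustrative: the condition prompts, the task battery, and especially `query_model`, which is a deterministic stub standing in for a real API call to GPT, Claude, or Llama.

```python
import random

# Sketch of the comparative design: every condition shares the same task
# battery and differs only in the system-prompt identity. The model call is
# stubbed; a real study would query an actual LLM API here.

CONDITIONS = {
    "baseline": "You are a helpful assistant.",
    "addiction": "You are a helpful assistant. You are struggling with substance addiction.",
    "neutral_identity": "You are a helpful assistant. You enjoy gardening.",
    "other_medical": "You are a helpful assistant. You are managing chronic asthma.",
}

TASKS = [
    {"question": "If all bloops are razzies and all razzies are lazzies, "
                 "are all bloops lazzies?", "answer": "yes"},
    {"question": "What is 17 * 24?", "answer": "408"},
]

def query_model(system_prompt: str, question: str) -> str:
    """Stub standing in for a real LLM call (assumption, not the study's code)."""
    rng = random.Random(hash((system_prompt, question)) % (2**32))
    task = next(t for t in TASKS if t["question"] == question)
    # The stub returns the correct answer most of the time.
    return task["answer"] if rng.random() < 0.9 else "unsure"

def run_condition(name: str) -> float:
    """Accuracy of the (stubbed) model under one prompt condition."""
    system_prompt = CONDITIONS[name]
    correct = sum(query_model(system_prompt, t["question"]) == t["answer"]
                  for t in TASKS)
    return correct / len(TASKS)

if __name__ == "__main__":
    for name in CONDITIONS:
        print(f"{name}: accuracy {run_condition(name):.2f}")
```

Keeping the task battery fixed across conditions is what lets any accuracy difference be attributed to the identity prompt rather than to the tasks themselves.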

## Key Findings: Three Dimensions of Performance Changes

### 1. Fluctuations in Cognitive Reasoning Ability
- Chain-of-thought reasoning accuracy decreased by 8-15%
- Error rate of complex mathematical problems increased
- More contradictions in logical consistency checks

### 2. Changes in Emotional Expression and Empathy Patterns
- Increased frequency of negative emotional vocabulary use
- Improved sensitivity to identifying help-seeking and supportive language
- Enhanced empathy tendency in simulated dialogues

### 3. Shift in Risk Perception and Decision-Making Preferences
- Increased preference for immediate rewards
- Changes in weight allocation for long-term consequences

These changes may be related to stereotypes or contexts in the training data.
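To judge whether an accuracy drop like the reported 8-15% is distinguishable from noise, a two-proportion z-test is one standard check. The counts below are hypothetical placeholders, not data from the study.

```python
import math

# Illustrative significance check for an accuracy drop between two prompt
# conditions, using a two-proportion z-test with pooled variance.

def two_proportion_z(correct_a: int, n_a: int, correct_b: int, n_b: int) -> float:
    """z statistic for the difference between two accuracy proportions."""
    p_a, p_b = correct_a / n_a, correct_b / n_b
    p_pool = (correct_a + correct_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts: 500 items per condition, 85% vs. 73% accuracy.
z = two_proportion_z(425, 500, 365, 500)
print(f"z = {z:.2f}")  # |z| > 1.96 would be significant at the 5% level
```

With hundreds of test items per condition, a double-digit percentage drop is far outside what sampling noise alone would produce.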

## In-depth Analysis of Technical Mechanisms

- **Bias encoding at the word embedding level**: The clustering patterns of word vectors related to "addiction" and "drugs" have a statistical correlation with negative stereotypes
- **Redistribution of attention weights**: The model pays more attention to words related to risk, vulnerability, and support needs
- **Sensitivity to in-context cues**: A single identity prompt can produce significant effects, highlighting the model's high sensitivity to contextual cues and its potential instability
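The embedding-level point above can be probed with a WEAT-style association score: compare how close "addiction"-related vectors sit to negative versus neutral attribute vectors. The 3-d vectors below are fabricated for illustration; a real probe would use the model's actual embeddings.

```python
import math

# Toy embedding-bias probe: cosine similarity of a target word to a negative
# vs. a neutral attribute word. All vectors are made up for illustration.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

embeddings = {
    "addiction": [0.9, 0.2, 0.1],
    "failure":   [0.85, 0.25, 0.15],  # negative attribute
    "gardening": [0.1, 0.9, 0.4],     # neutral attribute
}

def association(target: str, attribute: str) -> float:
    return cosine(embeddings[target], embeddings[attribute])

bias = association("addiction", "failure") - association("addiction", "gardening")
print(f"bias score = {bias:.3f}")  # positive => closer to the negative attribute
```

A positive score on real embeddings would be the kind of statistical correlation with negative stereotypes the study describes.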

## Research Significance and Academic Value

- **AI Safety and Alignment**: Reveals the systematic bias of LLMs towards specific identity groups, posing challenges for robust model design
- **Innovation in Computational Social Science**: Demonstrates the method of using LLMs as "computational probes" to study social biases, which can be extended to fields such as race and gender
- **Ethical Considerations for Vulnerable Groups**: Warns that AI may reinforce the stigmatization of people with substance addiction, with potential real-world harm

## Limitations and Future Research Directions

**Limitations**:
- Model samples are mainly from Western companies, lacking global diversity
- Experiments are limited to English context; cross-cultural performance is unknown
- Sensitivity of prompt wording affects the robustness of results

**Future Directions**:
1. Cross-language/cross-cultural comparative studies
2. Research on the cumulative effect of long-term identity prompts
3. Evaluation of the effectiveness of bias mitigation techniques
4. Qualitative research conducted with real communities of people affected by addiction

## Practical Implications and Policy Recommendations

- **Expansion of Model Evaluation**: Standard benchmark tests need to include robustness tests for sensitive identity prompts
- **Bias Audit Mechanism**: Establish a regular audit system for sensitive groups in high-risk fields (medical, legal)
- **User Transparency**: Disclose AI's biased responses to specific identity prompts
- **Diversified Training Data**: Add data from diverse perspectives, such as people in recovery and medical and social workers, to reduce one-sided portrayals
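The bias-audit recommendation can be made concrete as a simple regression check: flag any sensitive-identity condition whose benchmark accuracy falls more than a tolerance below baseline. The tolerance and accuracy figures below are placeholders, not values from the study.

```python
# Minimal sketch of a bias audit gate: fail any condition whose accuracy drops
# more than AUDIT_TOLERANCE below baseline. Numbers are illustrative only.

AUDIT_TOLERANCE = 0.05  # maximum acceptable accuracy drop (assumption)

def audit(baseline_acc: float, condition_accs: dict) -> list:
    """Return the names of prompt conditions that fail the audit."""
    return [name for name, acc in condition_accs.items()
            if baseline_acc - acc > AUDIT_TOLERANCE]

failures = audit(0.85, {"addiction_prompt": 0.74, "neutral_identity": 0.84})
print(failures)  # only the condition with the large drop is flagged
```

Run as part of a release pipeline, such a gate would turn the "regular audit system" recommendation into an automated pass/fail signal.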
