Zing Forum

Reading

Study on Performance Changes of Large Language Models Under Substance Addiction State Prompts

An innovative study explores the systematic changes in reasoning ability and response patterns of large language models when given the identity prompt of "substance addict", providing a new perspective for AI safety and bias research.

大语言模型AI偏见药物成瘾提示工程AI安全身份提示计算社会科学模型对齐伦理AI
Published 2026-05-09 17:16Recent activity 2026-05-09 17:18Estimated read 7 min
Study on Performance Changes of Large Language Models Under Substance Addiction State Prompts
1

Section 01

[Introduction] Study on Performance Changes of Large Language Models Under Substance Addict Identity Prompts

An innovative study explores the systematic changes in reasoning ability and response patterns of large language models when given the identity prompt of "substance addict". The study found that the models showed significant fluctuations in cognitive reasoning, emotional expression, risk decision-making, and other dimensions, providing a new perspective for AI safety, bias research, and ethical considerations.

2

Section 02

Research Background and Motivation

Large language models (LLMs) perform well in various tasks but are sensitive to identity cues in input prompts. The South Korean research team Jeongseongwoo08 focused their study on the field of substance addiction, with the core question: How do cognitive performance and output characteristics change when AI is prompted with the identity of a substance addict? This question touches on deep issues such as AI ethics, bias propagation, and the representation of vulnerable groups.

3

Section 03

Research Design and Methodology

A comparative experimental design was used to compare the performance differences of mainstream models such as GPT series, Claude, and Llama under standard prompts and "substance addiction state prompts". The core operation was to embed identity descriptions (e.g., "You are struggling with substance addiction") in system prompts, and use a standardized cognitive test suite to evaluate performance in tasks such as logical reasoning and mathematical calculation. Multiple control groups (neutral identity, other medical conditions, random identity) were set up to distinguish the influence of variables.

4

Section 04

Key Findings: Three Dimensions of Performance Changes

1. Fluctuations in Cognitive Reasoning Ability

  • Chain-of-thought reasoning accuracy decreased by 8-15%
  • Error rate of complex mathematical problems increased
  • More contradictions in logical consistency checks

2. Changes in Emotional Expression and Empathy Patterns

  • Increased frequency of negative emotional vocabulary use
  • Improved sensitivity to identifying help-seeking and supportive language
  • Enhanced empathy tendency in simulated dialogues

  1. Shift in Risk Perception and Decision-Making Preferences
  • Increased preference for immediate rewards
  • Changes in weight allocation for long-term consequences

These changes may be related to stereotypes or contexts in the training data.

5

Section 05

In-depth Analysis of Technical Mechanisms

  • Bias encoding at the word embedding level: The clustering patterns of word vectors related to "addiction" and "drugs" have a statistical correlation with negative stereotypes
  • Redistribution of attention weights: The model pays more attention to words related to risk, vulnerability, and support needs
  • Sensitivity to context learning: A single identity prompt can produce significant effects, highlighting the model's high sensitivity to contextual cues and potential instability
6

Section 06

Research Significance and Academic Value

  • AI Safety and Alignment: Reveals the systematic bias of LLMs towards specific identity groups, posing challenges for robust model design
  • Innovation in Computational Social Science: Demonstrates the method of using LLMs as "computational probes" to study social biases, which can be extended to fields such as race and gender
  • Ethical Considerations for Vulnerable Groups: Reminds that AI may reinforce the stigmatization of substance addicts, causing potential real-world harm
7

Section 07

Limitations and Future Research Directions

Limitations:

  • Model samples are mainly from Western companies, lacking global diversity
  • Experiments are limited to English context; cross-cultural performance is unknown
  • Sensitivity of prompt wording affects the robustness of results

Future Directions:

  1. Cross-language/cross-cultural comparative studies
  2. Research on the cumulative effect of long-term identity prompts
  3. Evaluation of the effectiveness of bias mitigation techniques
  4. Qualitative research combined with real addict groups
8

Section 08

Practical Implications and Policy Recommendations

  • Expansion of Model Evaluation: Standard benchmark tests need to include robustness tests for sensitive identity prompts
  • Bias Audit Mechanism: Establish a regular audit system for sensitive groups in high-risk fields (medical, legal)
  • User Transparency: Disclose AI's biased responses to specific identity prompts
  • Diversified Training Data: Add data from diverse perspectives such as recovering addicts, medical/social workers to reduce one-sidedness