Zing Forum


LLMs Empower Health Misinformation Analysis: Decoding Rhetorical Strategies of Cow Urine Therapy on YouTube

This study uses multiple LLMs to analyze 100 YouTube videos, revealing the distinct persuasion strategies employed by health misinformation spreaders and debunkers.

Health Misinformation · LLM · Misinformation · Rhetorical Analysis · Social Media · Cultural Discourse · YouTube
Published 2026-04-24 22:31 · Recent activity 2026-04-27 10:56 · Estimated read 6 min

Section 01

[Introduction] LLMs Empower Health Misinformation Analysis: Decoding Rhetorical Strategies of Cow Urine Therapy on YouTube

This study uses multiple LLMs to analyze 100 YouTube videos, revealing the distinct persuasion strategies employed by health misinformation spreaders and debunkers. The research covers mainstream models such as the GPT-4 series and Gemini 2.5 Pro, and constructs a classification system of 14 persuasion-strategy categories, providing new methods and insights for misinformation governance.


Section 02

Background: Complexity of Health Misinformation and Controversy Over Cow Urine Therapy

In the era of social media, the spread of health misinformation has become a global challenge. The problem becomes more intractable when traditional cultural beliefs intersect with modern scientific discourse, since some "traditional wisdom" lacks any scientific basis. Cow urine therapy is a typical case: Indian traditional culture attributes purifying and therapeutic effects to it, but these claims have not been verified by modern medicine, and promoters and debunkers on YouTube hold opposing views.


Section 03

Research Methods: Multi-Model LLMs and Classification of 14 Persuasion Strategies

The study selected transcribed texts from 100 YouTube videos and analyzed them using models such as the GPT-4 series (GPT-4, GPT-4o, GPT-4.1, GPT-5), Gemini 2.5 Pro, and Mistral Medium 3. A classification system of 14 persuasion strategy categories was constructed, including appeals to authority, appeals to efficacy, conspiracy framing, social proof, refutation strategies, etc.
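The annotation step described above can be sketched as a simple classify-and-parse routine: build a prompt listing the candidate categories, send the transcript to a model, and match the free-text answer back to a known label. The prompt wording, the `call_llm` stub, and the abbreviated category list are illustrative assumptions, not the paper's exact taxonomy or prompts:

```python
# Illustrative sketch of LLM-based persuasion-strategy annotation.
# Category names below are a subset assumed from the article's summary,
# not the paper's verbatim 14-way taxonomy.

CATEGORIES = [
    "appeal to authority",
    "appeal to efficacy",
    "conspiracy framing",
    "social proof",
    "direct refutation",
    "evidence presentation",
    # ... remaining categories from the 14-way classification system
]

def build_prompt(transcript: str) -> str:
    """Assemble a zero-shot classification prompt for one transcript."""
    options = "\n".join(f"- {c}" for c in CATEGORIES)
    return (
        "Identify the dominant persuasion strategy in the following "
        f"YouTube transcript. Answer with one label from:\n{options}\n\n"
        f"Transcript:\n{transcript}\n\nLabel:"
    )

def parse_label(response: str) -> str:
    """Match the model's free-text answer to a known category."""
    answer = response.strip().lower()
    for category in CATEGORIES:
        if category in answer:
            return category
    return "unlabeled"  # no recognizable category in the response

# In a real pipeline, `call_llm(prompt)` would hit GPT-4, Gemini 2.5 Pro,
# etc.; here it is a placeholder standing in for the API call.
def annotate(transcript: str, call_llm) -> str:
    return parse_label(call_llm(build_prompt(transcript)))
```

Running each transcript through several models this way is what makes cross-model comparison (GPT-4 series vs. Gemini vs. Mistral) straightforward.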


Section 04

Key Findings: Differences in Rhetorical Strategies Between Spreaders and Debunkers

Promoter strategies: rely on appeals to efficacy (claims of curing disease and health benefits), social proof (success stories, positive testimonials), and traditional authority (Ayurvedic classics and religious texts), appealing to emotional experience rather than scientific evidence.

Debunker strategies: adopt appeals to authority (medical research, statements from health institutions), direct refutation (exposing logical flaws), and evidence presentation (experimental data and clinical results), appealing to reason and the scientific method.


Section 05

LLM Annotation Reliability: Human-Machine Consistency Reaches 90.1%

The study verified the reliability of LLMs in cultural discourse analysis, with human-machine annotation consistency reaching 90.1%. This indicates that LLMs can accurately identify complex persuasion strategies, the 14-category classification system has clear differentiation, and automated annotation can serve as an effective tool for large-scale analysis, providing a methodological foundation for AI-based health misinformation monitoring.
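A minimal sketch of how such human-machine consistency can be computed: raw percent agreement (the kind of figure 90.1% suggests), plus Cohen's kappa as a common chance-corrected complement. The function names are illustrative; the paper's exact metric definition is an assumption here:

```python
from collections import Counter

def percent_agreement(human: list, model: list) -> float:
    """Share of items where human and LLM assign the same label."""
    matches = sum(h == m for h, m in zip(human, model))
    return matches / len(human)

def cohens_kappa(human: list, model: list) -> float:
    """Chance-corrected agreement between two annotators."""
    n = len(human)
    p_o = percent_agreement(human, model)          # observed agreement
    h_counts, m_counts = Counter(human), Counter(model)
    labels = set(human) | set(model)
    # expected agreement if both annotators labeled independently
    p_e = sum((h_counts[l] / n) * (m_counts[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)
```

With 14 categories, kappa matters: high raw agreement can partly reflect skewed label distributions, and reporting both guards against that.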


Section 06

Implications for Misinformation Governance: Cultural Sensitivity and Differentiated Responses

  1. Cultural Sensitivity: Health misinformation is rooted in deep-seated cultural beliefs; simple deletion/labeling can easily trigger backlash, so intervention strategies need to be designed by understanding rhetorical mechanisms.
  2. Differentiated Responses: For promoters' social proof/efficacy appeals, diverse voices can be displayed and the visibility of scientific evidence can be enhanced.
  3. Feasibility of Automated Monitoring: The high accuracy of LLMs makes it possible to build real-time monitoring systems.

Section 07

Limitations and Future Directions

Limitations: the sample includes only English-language content, missing other linguistic communities; the cross-sectional design cannot capture how strategies evolve over time; and cultural specificity may limit the generalizability of the conclusions. Future directions: expand to multi-language, multi-platform analysis; longitudinally track the life cycle of misinformation; and develop real-time intervention tools to counter misinformation early.