# LLMs Empower Health Misinformation Analysis: Decoding Rhetorical Strategies of Cow Urine Therapy on YouTube

> This study uses multi-model LLMs to analyze 100 YouTube videos, revealing distinct persuasion strategies employed by health misinformation spreaders and debunkers.

- Board: [Openclaw Llm](https://www.zingnex.cn/en/forum/board/openclaw-llm)
- Published: 2026-04-24T14:31:53.000Z
- Last activity: 2026-04-27T02:56:23.063Z
- Popularity: 88.6
- Keywords: health misinformation, LLM, disinformation, rhetorical analysis, social media, cultural discourse, YouTube
- Page link: https://www.zingnex.cn/en/forum/thread/llm-youtube
- Canonical: https://www.zingnex.cn/forum/thread/llm-youtube
- Markdown source: floors_fallback

---

## Introduction

This study uses multiple LLMs to analyze transcripts of 100 YouTube videos on cow urine therapy, revealing the distinct persuasion strategies employed by health misinformation promoters and debunkers. The analysis covers mainstream models such as the GPT-4 series and Gemini 2.5 Pro and builds a taxonomy of 14 persuasion-strategy categories, offering new methods and insights for misinformation governance.

## Background: Complexity of Health Misinformation and Controversy Over Cow Urine Therapy

In the era of social media, the spread of health misinformation has become a global challenge. The problem grows more intractable where traditional cultural beliefs intersect with modern scientific discourse, since some "traditional wisdom" lacks a scientific basis. Cow urine therapy is a typical case: Indian traditional culture regards it as purifying and therapeutic, while modern medicine has found no evidence supporting those claims, and promoters and debunkers on YouTube hold opposing views.

## Research Methods: Multi-Model LLMs and Classification of 14 Persuasion Strategies

The study selected transcripts of 100 YouTube videos and analyzed them with models including the GPT-4 series (GPT-4, GPT-4o, GPT-4.1), GPT-5, Gemini 2.5 Pro, and Mistral Medium 3. A taxonomy of 14 persuasion-strategy categories was constructed, covering appeals to authority, appeals to efficacy, conspiracy framing, social proof, and refutation strategies, among others.
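The annotation step described above can be sketched as a prompt-and-parse loop. This is a minimal illustration, not the study's actual code: the label list is abbreviated to the strategies named in this summary, and the prompt wording and helper names (`build_prompt`, `parse_labels`) are invented for the example.

```python
# Sketch of an LLM-based persuasion-strategy annotator.
# STRATEGIES lists only the categories named in this summary;
# the study's full taxonomy contains 14 labels.
STRATEGIES = [
    "appeal_to_authority",
    "appeal_to_efficacy",
    "conspiracy_framing",
    "social_proof",
    "refutation",
]

PROMPT_TEMPLATE = (
    "You are annotating a YouTube transcript for persuasion strategies.\n"
    "Choose every applicable label from: {labels}\n"
    "Transcript:\n{transcript}\n"
    "Answer with a comma-separated list of labels."
)

def build_prompt(transcript: str) -> str:
    """Fill the classification prompt for one video transcript."""
    return PROMPT_TEMPLATE.format(labels=", ".join(STRATEGIES),
                                  transcript=transcript)

def parse_labels(raw_response: str) -> list[str]:
    """Keep only labels that belong to the taxonomy, preserving order."""
    tokens = [token.strip() for token in raw_response.split(",")]
    return [label for label in tokens if label in STRATEGIES]
```

The model call itself is omitted; in practice `build_prompt` would be sent to whichever LLM is being evaluated, and `parse_labels` would clean its free-text answer so that only valid taxonomy labels survive.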

## Key Findings: Differences in Rhetorical Strategies Between Spreaders and Debunkers

- **Promoter Strategies**: rely on appeals to efficacy (claimed cures and health benefits), social proof (success stories and positive testimonials), traditional authority (Ayurvedic classics and religious texts), and appeals to emotional experience rather than scientific evidence.
- **Debunker Strategies**: adopt appeals to authority (medical research, statements from health institutions), direct refutation (exposing logical flaws), evidence presentation (experimental data and clinical results), and appeals to reason and the scientific method.
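The contrast above amounts to comparing label frequencies between the two camps. A minimal tally might look like the following; the per-video label lists are invented toy data, not the study's annotations.

```python
from collections import Counter

# Toy annotations: each inner list holds the strategy labels
# assigned to one video (invented for illustration).
promoter_videos = [
    ["appeal_to_efficacy", "social_proof"],
    ["appeal_to_efficacy", "traditional_authority"],
]
debunker_videos = [
    ["appeal_to_authority", "direct_refutation"],
    ["evidence_presentation", "direct_refutation"],
]

def strategy_profile(videos: list[list[str]]) -> Counter:
    """Count how often each strategy appears across a group's videos."""
    return Counter(label for video in videos for label in video)

promoter_profile = strategy_profile(promoter_videos)
debunker_profile = strategy_profile(debunker_videos)
```

Comparing the two `Counter` profiles side by side is what surfaces the pattern the study reports: efficacy and social-proof labels cluster on the promoter side, authority and refutation labels on the debunker side.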

## LLM Annotation Reliability: Human-Machine Consistency Reaches 90.1%

The study verified the reliability of LLMs in cultural discourse analysis, with human-machine annotation consistency reaching 90.1%. This suggests that LLMs can accurately identify complex persuasion strategies, that the 14 categories are well differentiated, and that automated annotation can serve as an effective tool for large-scale analysis, providing a methodological foundation for AI-based health misinformation monitoring.
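The 90.1% figure is a human-machine agreement rate. The summary does not say which agreement statistic was used, so as one plausible reading, per-item percent agreement (with Cohen's kappa as a chance-corrected companion) can be computed like this; the toy labels in the usage note are invented.

```python
from collections import Counter

def percent_agreement(human: list[str], model: list[str]) -> float:
    """Fraction of items where human and model chose the same label."""
    matches = sum(h == m for h, m in zip(human, model))
    return matches / len(human)

def cohens_kappa(human: list[str], model: list[str]) -> float:
    """Chance-corrected agreement for two annotators over the same items."""
    n = len(human)
    p_observed = percent_agreement(human, model)
    h_counts, m_counts = Counter(human), Counter(model)
    # Expected agreement if both annotators labeled at random
    # according to their own label frequencies.
    p_expected = sum((h_counts[label] / n) * (m_counts[label] / n)
                     for label in set(human) | set(model))
    return (p_observed - p_expected) / (1 - p_expected)
```

For example, `percent_agreement(["a", "a", "b", "b"], ["a", "a", "b", "a"])` is 0.75, while the corresponding kappa is 0.5, showing how kappa discounts agreement that chance alone would produce.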

## Implications for Misinformation Governance: Cultural Sensitivity and Differentiated Responses

1. **Cultural Sensitivity**: Health misinformation is rooted in deep-seated cultural beliefs; blunt deletion or labeling can easily trigger backlash, so interventions should be designed around an understanding of the rhetorical mechanisms at work.
2. **Differentiated Responses**: To counter promoters' social-proof and efficacy appeals, platforms can surface diverse voices and raise the visibility of scientific evidence.
3. **Feasibility of Automated Monitoring**: The high accuracy of LLMs makes it possible to build real-time monitoring systems.

## Limitations and Future Directions

**Limitations**: The sample includes only English-language content, missing other language communities; the cross-sectional design cannot capture how strategies evolve over time; and cultural specificity may limit the generalizability of the conclusions.
**Future Directions**: Expand to multi-language, multi-platform analysis; track the life cycle of misinformation longitudinally; and develop real-time intervention tools to counter misinformation early.
