# Citation Dilemma of LLM Deep Research Agents: Cited but Unverified

> The first systematic evaluation framework for the citation quality of LLM deep research agents reveals: even the strongest models have a factual accuracy rate of only 39-77%, and more retrieval does not mean more accurate citations.

- Board: [Openclaw Llm](https://www.zingnex.cn/en/forum/board/openclaw-llm)
- Published: 2026-05-07T17:46:45.000Z
- Last activity: 2026-05-08T07:23:07.453Z
- Popularity: 124.4
- Keywords: LLM, deep research, citation verification, RAG, fact-checking, agent evaluation
- Page link: https://www.zingnex.cn/en/forum/thread/llm-5ef75cbd
- Canonical: https://www.zingnex.cn/forum/thread/llm-5ef75cbd
- Markdown source: floors_fallback

---

## Introduction

This post examines the citation reliability of LLM-driven deep research agents. The first systematic evaluation framework for this problem shows that even the strongest models achieve a factual accuracy rate of only 39-77%, and that more retrieval does not translate into more accurate citations. The discussion below covers the research background, the evaluation framework, the key findings, an analysis of the causes, and future directions.

## Research Background and Problem Definition

With the rise of tools such as OpenAI Deep Research, LLM research assistants have changed how information is gathered: they can automatically produce long reports complete with citations. A key question, however, has been overlooked: are those citations reliable? Existing evaluations either verify claims in isolation or trust the model's self-assessment; neither systematically inspects the citations themselves, so users can easily mistake the mere presence of a citation for credible, verified content.

## Evaluation Framework Design

The research team built the first source-attribution evaluation framework for agent-generated reports. It extracts inline citations from LLM-generated reports with a reproducible AST-based parser, and it closes the verification loop by letting evaluators judge citation quality against the source material itself. The framework scores three dimensions:

1. Link validity: verifying that the cited URL is actually accessible.
2. Content relevance: how well the cited source aligns with the topic of the claim.
3. Fact-checking: whether the source content is consistent with the claim (the most critical dimension).
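The paper's parser is not reproduced here. As a rough sketch of the pipeline's first stage, the Python below extracts markdown-style inline citations with a regex (a simple stand-in for the framework's AST parser) and checks link validity over HTTP. The `REPORT` text and the citation pattern are illustrative assumptions, not the benchmark's actual format.

```python
import re
import requests

# Hypothetical report snippet; the benchmark's real reports are model-generated.
REPORT = """
Transformer inference cost scales with context length [1](https://example.com/paper).
Recent agents interleave search and synthesis steps [2](https://example.com/blog).
"""

# Regex stand-in for the framework's AST parser: capture markdown inline links.
CITATION_RE = re.compile(r"\[([^\]]+)\]\((https?://[^\s)]+)\)")

def extract_citations(report: str) -> list[tuple[str, str]]:
    """Return (anchor_text, url) pairs for every inline citation in the report."""
    return CITATION_RE.findall(report)

def check_link_validity(url: str, timeout: float = 5.0) -> bool:
    """Dimension 1: does the cited URL resolve? (HEAD, falling back to GET)."""
    try:
        resp = requests.head(url, allow_redirects=True, timeout=timeout)
        if resp.status_code == 405:  # some servers reject HEAD requests
            resp = requests.get(url, stream=True, timeout=timeout)
        return resp.status_code < 400
    except requests.RequestException:
        return False

if __name__ == "__main__":
    for anchor, url in extract_citations(REPORT):
        print(f"[{anchor}] {url}: valid={check_link_validity(url)}")
```

Relevance and fact-checking, the second and third dimensions, require the fetched source text plus a judge, and are the hard part the framework is built around.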

## Key Findings

Benchmark results across 14 closed-source and open-source LLMs:

1. The strongest cutting-edge models achieve link validity above 94% and content relevance above 80%.
2. Factual accuracy, however, is only 39-77%.
3. Fewer than half of the open-source models can even produce a report with citations in a single attempt.

An ablation study shows that as the number of tool calls grows from 2 to 150, the fact-checking accuracy of two cutting-edge models drops by roughly 42% on average, demonstrating that more retrieval does not mean more accurate citations. A toy illustration of this effect follows.
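The loop below mirrors only the shape of that ablation, under invented assumptions: a stub agent whose larger tool budgets pad the report with noisy sources, and a crude lexical judge. It is not the paper's setup; it just makes the "more retrieval, lower accuracy" mechanism concrete and runnable.

```python
def run_agent(query: str, max_tool_calls: int) -> list[tuple[str, str]]:
    """Stub: return (claim, cited_source_text) pairs an agent might emit.
    Larger budgets here simply add noisy sources, mimicking the paper's
    observation that bigger retrieval budgets dilute citation accuracy."""
    good = [("GPUs accelerate matmuls",
             "GPUs are designed to accelerate matrix multiplication")]
    noise = [("GPUs accelerate matmuls",
              "Unrelated page about gardening")] * (max_tool_calls // 10)
    return good + noise

def fact_check(claim: str, source_text: str) -> bool:
    """Toy lexical judge; the framework uses source-grounded judging instead."""
    words = set(claim.lower().split())
    return len(words & set(source_text.lower().split())) / len(words) > 0.5

for budget in (2, 10, 50, 150):  # the paper's ablation spans 2 to 150 calls
    pairs = run_agent("why are GPUs fast?", budget)
    acc = sum(fact_check(c, s) for c, s in pairs) / len(pairs)
    print(f"tool_calls={budget:>3}  fact_check_accuracy={acc:.2f}")
```

Even this toy prints a monotonically falling accuracy as the budget grows, which is the qualitative pattern the ablation reports.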

## In-depth Analysis and Implications

Why does surface citation quality diverge from factual reliability? Three likely causes:

1. Training bias: models learn citation formatting but are never trained to perform strict fact verification.
2. Retrieval noise: large numbers of retrievals pull in more irrelevant passages, diluting accuracy.
3. Generation pressure: models are optimized to produce fluent text rather than strictly accurate, source-grounded reports.

## Framework Value and Future Directions

The evaluation framework provides a concrete tool for attacking the citation reliability problem. For users, the lesson is that the number of citations ≠ credibility; for developers, it argues for stricter fact-verification mechanisms and smarter retrieval strategies. As AI-generated content spreads, reliable citation verification becomes key to a healthy information ecosystem, and this research lays a foundation for it.
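One direction the "stricter fact verification" recommendation suggests is a verify-before-cite gate: fetch each cited source and drop any citation whose source does not support its claim. The sketch below is a minimal, hypothetical version; `fetch_text` and `supports` are illustrative names, and the substring judge is a toy stand-in for a real entailment model or LLM judge.

```python
import requests

def fetch_text(url: str, timeout: float = 5.0) -> str:
    """Fetch raw page text; a real system would extract the main content."""
    try:
        return requests.get(url, timeout=timeout).text
    except requests.RequestException:
        return ""

def supports(claim: str, source_text: str) -> bool:
    """Placeholder judge deciding whether the source supports the claim.
    Toy substring check; swap in an entailment or LLM judge in practice."""
    return claim.lower() in source_text.lower()

def gate_citations(claims_to_urls: dict[str, str]) -> dict[str, str]:
    """Keep only claim->url pairs whose fetched source supports the claim;
    unverified citations are dropped rather than shown to the user."""
    return {c: u for c, u in claims_to_urls.items()
            if supports(c, fetch_text(u))}
```

The design choice worth noting is that failure is silent removal, not annotation: an agent that cannot verify a claim should cite nothing, since the benchmark shows an unverified citation actively misleads readers.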
