Zing Forum

Reading

Citation Dilemma of LLM Deep Research Agents: Cited but Unverified

The first systematic evaluation framework for the citation quality of LLM deep research agents reveals that even the strongest models achieve factual accuracy of only 39-77%, and that more retrieval does not mean more accurate citations.

LLM · Deep Research · Citation Verification · RAG · Fact-Checking · Agent Evaluation
Published 2026-05-08 01:46 · Recent activity 2026-05-08 15:23 · Estimated read 5 min

Section 01

Citation Dilemma of LLM Deep Research Agents: Cited but Unverified (Introduction)

This article examines the citation reliability of LLM-driven deep research agents. The first systematic evaluation framework reveals that even the strongest models achieve factual accuracy of only 39-77%, and that more retrieval does not mean more accurate citations. The discussion covers the background, the evaluation framework, key findings, analysis, and future directions.


Section 02

Research Background and Problem Definition

With the rise of tools like OpenAI Deep Research, LLM research assistants have changed how information is gathered, automatically generating reports complete with citations. A key issue, however, has been overlooked: are those citations reliable? Existing evaluations either verify claims in isolation or trust the model's self-assessment, without systematically inspecting the citations themselves, so users easily mistake the mere presence of citations for credible content.


Section 03

Evaluation Framework Design

The research team introduced the first source-attribution evaluation framework for deep research agents. It extracts inline citations from LLM-generated reports with a reproducible AST parser and closes the verification loop, letting evaluators judge citation quality against the source materials. The evaluation covers three dimensions:

1. Link validity: is the cited URL still accessible?
2. Content relevance: does the source actually discuss the topic of the claim?
3. Fact-checking: does the source content support the claim? (the most critical dimension)
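The pipeline above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: a regex stands in for the reproducible AST parser, inline citations are assumed to be markdown links, and the `fetch` and `judge` callables are hypothetical stand-ins for the link checker and the relevance/fact-check scorer.

```python
import re
from dataclasses import dataclass

@dataclass
class Citation:
    claim: str  # the sentence the citation is attached to
    url: str    # the cited source URL

# Assumed citation format: markdown links like "[1](https://example.com/page)".
CITE_RE = re.compile(r"\[([^\]]+)\]\((https?://[^)\s]+)\)")

def extract_citations(report: str) -> list[Citation]:
    """Pair every citation URL with the sentence that contains it."""
    citations = []
    for sentence in re.split(r"(?<=[.!?])\s+", report):
        for _label, url in CITE_RE.findall(sentence):
            claim = CITE_RE.sub("", sentence).strip()
            citations.append(Citation(claim=claim, url=url))
    return citations

def evaluate(citation: Citation, fetch, judge) -> dict:
    """Score one citation on the three dimensions above.
    `fetch(url)` returns page text or None if the link is dead;
    `judge(kind, claim, page)` is any relevance / fact-check scorer
    (e.g. an LLM call). Both callables are assumptions."""
    page = fetch(citation.url)
    valid = page is not None
    return {
        "link_valid": valid,
        "relevant": bool(judge("relevance", citation.claim, page)) if valid else False,
        "supported": bool(judge("fact", citation.claim, page)) if valid else False,
    }
```

Note that a dead link short-circuits the other two dimensions: relevance and factual support cannot be scored without the source content, which mirrors why link validity is checked first.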


Section 04

Key Findings

Benchmark results across 14 closed- and open-source LLMs:

1. The strongest frontier models achieve link validity above 94% and relevance above 80%.
2. Factual accuracy is only 39-77%.
3. Fewer than half of the open-source models can generate a report with citations in a single attempt.

Ablation studies show that when the number of tool calls grows from 2 to 150, the fact-checking accuracy of two frontier models drops by about 42% on average, demonstrating that more retrieval does not mean more accurate citations.
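The tool-call ablation amounts to sweeping a budget parameter and scoring each resulting report. A minimal sketch, where `run_agent` and `fact_check_accuracy` are hypothetical stand-ins for the paper's actual agent and scoring pipeline:

```python
# Hypothetical ablation harness: sweep the agent's tool-call budget and
# record fact-checking accuracy at each setting.
def ablate_tool_budget(run_agent, fact_check_accuracy,
                       budgets=(2, 10, 50, 150)):
    results = {}
    for budget in budgets:
        report = run_agent(max_tool_calls=budget)  # generate under budget
        results[budget] = fact_check_accuracy(report)  # score citations
    return results
```

The budgets shown (2 through 150) match the range reported in the study; everything else is illustrative.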


Section 05

In-depth Analysis and Implications

Why does surface citation quality diverge from factual reliability?

1. Training bias: models learn citation formats without mastering strict fact-checking.
2. Retrieval noise: heavy retrieval introduces more irrelevant information, lowering accuracy.
3. Generation pressure: models favor fluent text over strictly accurate reports.


Section 06

Framework Value and Future Directions

This evaluation framework provides a tool for tackling the citation-reliability problem. For users, the lesson is that the number of citations ≠ credibility; for developers, stricter fact-verification mechanisms and smarter retrieval strategies are needed. As AI-generated content proliferates, a reliable citation-verification mechanism is key to a healthy information ecosystem, and this research lays the groundwork for it.