Section 01
[Introduction] FHI: A New Framework for LLM Hallucination Detection Based on Causal Attribution Alignment
This article introduces the Faithfulness-Hallucination Index (FHI), a composite metric that detects hallucinations in large language models by measuring how well a model's self-explanations align with its internal attribution signals. The framework scores output credibility along four complementary dimensions, yielding an interpretable approach to LLM hallucination detection.
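To make the composite structure concrete, here is a minimal sketch of how four per-dimension credibility scores might be aggregated into a single index. The article has not yet named the four dimensions or specified the aggregation rule, so the function name `fhi_score`, the weighted-mean combination, and the uniform default weights are illustrative assumptions rather than the authors' definition.

```python
import numpy as np

def fhi_score(dimension_scores, weights=None):
    """Combine four per-dimension credibility scores into one FHI value.

    Assumption: each score lies in [0, 1] and the index is a weighted mean,
    so the result also lies in [0, 1] (higher = more faithful, less
    hallucination-prone). The real FHI aggregation may differ.
    """
    scores = np.asarray(dimension_scores, dtype=float)
    if scores.shape != (4,):
        raise ValueError("FHI expects exactly four dimension scores")
    if weights is None:
        weights = np.full(4, 0.25)  # uniform weighting by default (an assumption)
    else:
        weights = np.asarray(weights, dtype=float)
        weights = weights / weights.sum()  # normalize so the index stays in [0, 1]
    return float(np.dot(weights, scores))

# Hypothetical example: a response that aligns well with internal
# attributions on one dimension but scores lower on the others.
print(fhi_score([0.9, 0.4, 0.7, 0.6]))  # -> 0.65
```

A weighted mean is one natural choice for a composite index because it keeps each dimension's contribution interpretable; the weights could later be tuned on a labeled hallucination benchmark.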