Section 01
Introduction: A Non-Neural Network Approach for Lightweight Large Model Hallucination Detection
This article presents a lightweight framework for detecting hallucinations in large language model outputs that does not require neural networks. At its core, it uses TF-IDF and cosine similarity, combined with Wikipedia evidence retrieval, to verify factual claims in LLM outputs. The framework is used to compare the factual reliability of three open-source models: Llama-2, Mistral-7B, and Qwen-2. Its lightweight design and strong interpretability make it a practical hallucination-detection option for resource-constrained scenarios.
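The core verification step described above can be sketched in a few lines of pure Python. This is a minimal illustration, not the article's actual implementation: the `is_supported` helper, the tokenizer, the smoothed-IDF formula, and the `threshold` value are all assumptions for the sake of the example, and a real pipeline would add Wikipedia retrieval and claim extraction in front of it.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build sparse TF-IDF vectors (dicts) for a small list of documents."""
    tokenized = [doc.lower().split() for doc in docs]  # naive whitespace tokenizer (an assumption)
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))  # document frequency: count each term once per doc
    n = len(docs)
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        # Smoothed IDF (log((1+n)/(1+df)) + 1) keeps weights positive
        # even for terms that appear in every document.
        vecs.append({t: (c / len(toks)) * (math.log((1 + n) / (1 + df[t])) + 1)
                     for t, c in tf.items()})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def is_supported(claim, evidence, threshold=0.3):
    """Flag a claim as supported when its TF-IDF cosine similarity to the
    retrieved evidence passage reaches the threshold; below it, the claim
    is treated as a possible hallucination. The threshold is illustrative."""
    claim_vec, ev_vec = tfidf_vectors([claim, evidence])
    return cosine(claim_vec, ev_vec) >= threshold
```

Because the score is just a term-overlap similarity between the claim and the evidence, every decision is directly inspectable, which is the interpretability advantage the approach trades model capacity for.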