Section 01
Guide to the Comprehensive Experimental Framework for White-box Research on LLM Hallucinations
This article introduces an open-source white-box research framework that systematically controls decoding parameters, retrieval contexts, and PEFT fine-tuning techniques to analyze how hallucinations arise in large language models (LLMs) and how they can be mitigated. The framework addresses a key limitation of traditional black-box studies, which struggle to explain the internal mechanisms behind hallucinations, and thereby supports the reliable deployment of LLMs in high-risk fields such as healthcare and law.
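To make the three controlled factors concrete, here is a minimal sketch of what an experiment grid over decoding parameters, retrieval conditions, and PEFT methods might look like. All class names, factor levels, and values below are hypothetical illustrations, not taken from the framework itself:

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class ExperimentConfig:
    """One cell of a hypothetical experiment grid over the three controlled factors."""
    temperature: float   # decoding parameter
    top_p: float         # decoding parameter
    retrieval: str       # retrieval-context condition
    peft_method: str     # PEFT fine-tuning technique

# Hypothetical factor levels; a real framework would define its own.
temperatures = [0.0, 0.7, 1.0]
top_ps = [0.9, 1.0]
retrieval_modes = ["none", "gold_context", "noisy_context"]
peft_methods = ["none", "lora", "prefix_tuning"]

# Full factorial grid: every combination of the controlled factors.
grid = [
    ExperimentConfig(t, p, r, m)
    for t, p, r, m in product(temperatures, top_ps, retrieval_modes, peft_methods)
]
print(len(grid))  # 3 * 2 * 3 * 3 = 54 configurations
```

Sweeping such a grid while logging model internals (attention patterns, hidden states) is what distinguishes this white-box setup from black-box evaluation, which observes only the final outputs.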