Section 01
Introduction: Overview of a Research Resource Compilation on LLM Reliability
This article systematically reviews cutting-edge research on uncertainty quantification, reliability assessment, and adversarial robustness in large language models (LLMs). It covers key topics such as confidence calibration, hallucination detection, and defense against adversarial attacks, giving researchers a comprehensive technical roadmap. The underlying resource library, maintained at Johns Hopkins University, compiles the core papers, tools, and methodologies in this field to help readers navigate active research directions.
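To make one of these topics concrete: confidence calibration asks whether a model's stated confidence matches its empirical accuracy. Below is a minimal sketch of Expected Calibration Error (ECE), a standard calibration metric; the function name and the toy data are illustrative assumptions, not taken from the repository.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE sketch: bin predictions by confidence, then take the
    sample-weighted average gap between mean confidence and accuracy."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)  # samples in this bin
        if not mask.any():
            continue
        weight = mask.mean()  # fraction of all samples falling in the bin
        gap = abs(correct[mask].mean() - confidences[mask].mean())
        ece += weight * gap
    return ece

# Hypothetical example: a model reports 90% confidence but is right 60% of the time
conf = [0.9, 0.9, 0.9, 0.9, 0.9]
hit = [1, 1, 1, 0, 0]
print(f"ECE = {expected_calibration_error(conf, hit):.3f}")  # -> 0.300
```

A perfectly calibrated model would score an ECE of 0; the 0.3 here quantifies the overconfidence gap, which is the kind of signal the calibration literature surveyed by the repository tries to measure and reduce.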