Section 01
[Introduction] The Illusion of AGI: An Experimental Project Exploring the Capability Boundaries of Large Language Models
This article introduces The-illusion-of-AGI, an open-source project that experimentally probes the capability boundaries of current state-of-the-art large language models (LLMs). It asks whether these models truly possess understanding, learning, and reasoning abilities, and seeks to distinguish "statistical pattern matching" from "true intelligence". The project's test directions include spatial reasoning, confidence calibration, and interactive reasoning, each of which exposes limitations of current LLMs.