Section 01
[Introduction] Proof of Coherence: An Open-Source Observatory for LLM Reasoning Consistency
This article introduces Proof of Coherence, an open-source observatory for systematically measuring the reasoning consistency of large language models (LLMs). The project targets the phenomenon of LLM self-contradiction: by combining an auditable experimental framework, formal consistency metrics, and open methodologies, it aims to provide a scientific foundation for understanding and improving the consistency of AI reasoning, and thereby its reliability.
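To make the idea of a formal consistency metric concrete, here is a generic sketch (an illustration, not the project's actual metric): sample a model's answer to the same prompt several times, then score how often the sampled answers agree with one another.

```python
from collections import Counter
from itertools import combinations

def agreement_rate(answers):
    """Fraction of answer pairs that agree (1.0 = perfectly consistent)."""
    if len(answers) < 2:
        return 1.0
    pairs = list(combinations(answers, 2))
    agreeing = sum(1 for a, b in pairs if a == b)
    return agreeing / len(pairs)

def majority_answer(answers):
    """Most frequent answer across repeated samples of the same prompt."""
    return Counter(answers).most_common(1)[0][0]

# Hypothetical example: five sampled answers, one contradicting the rest.
samples = ["yes", "yes", "no", "yes", "yes"]
print(agreement_rate(samples))   # 6 agreeing pairs out of 10 -> 0.6
print(majority_answer(samples))  # "yes"
```

A pairwise agreement score like this is one of the simplest self-consistency measures; a real observatory would also need semantic matching (so that paraphrased answers count as agreement) and controls over sampling temperature.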