Section 01
Introduction: Proof-of-Coherence - A New Tool for Quantifying LLM Reasoning Consistency
This article introduces Proof-of-Coherence, an open-source framework for systematically observing and quantifying the reasoning consistency of large language models (LLMs). By detecting when a model contradicts itself on the same problem, it gives AI safety researchers an auditable evaluation tool and fills a gap in traditional LLM evaluation, which rarely measures consistency.
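To make the core idea concrete, here is a minimal sketch of one way such a consistency check could work; it is not the framework's actual API. The `ask_model` callable, the `coherence_score` function, and the exact-match agreement rule are all assumptions for illustration, and a real contradiction detector would presumably be more robust than string comparison.

```python
# A minimal sketch of consistency scoring, assuming nothing about the real
# Proof-of-Coherence API: sample a model several times on the same question
# and report the pairwise agreement rate of its final answers.
from itertools import combinations
from typing import Callable, List

def coherence_score(ask_model: Callable[[str], str],
                    question: str,
                    n_samples: int = 5) -> float:
    """Fraction of answer pairs that agree; 1.0 means fully self-consistent."""
    # Repeatedly pose the identical question; ask_model is a hypothetical
    # stand-in for any LLM call (API client, local model, etc.).
    answers: List[str] = [ask_model(question).strip().lower()
                          for _ in range(n_samples)]
    # Exact-match agreement over all unordered answer pairs; a production
    # tool would instead flag semantic contradictions between answers.
    pairs = list(combinations(answers, 2))
    agreeing = sum(1 for a, b in pairs if a == b)
    return agreeing / len(pairs) if pairs else 1.0

# Example: a deterministic stub model is perfectly coherent.
print(coherence_score(lambda q: "42", "What is 6 * 7?"))  # 1.0
```

Scoring pairwise agreement rather than majority vote is just one design choice here: it penalizes any disagreement symmetrically and yields a score in [0, 1] that is easy to audit and compare across models.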