Section 01
[Introduction] A New Finding in LLM Theory of Mind: Models Can Understand Others but Not Themselves
Recent research finds that state-of-the-art large language models (LLMs) show a selective deficit on theory-of-mind tests: they can accurately infer others' mental states but fail at self-modeling tasks unless they are given their own reasoning traces as an aid. This finding reveals an asymmetry in LLMs' theory-of-mind capabilities and offers a new perspective on the study of AI cognitive mechanisms.