Section 01
[Introduction] Discovering Shared Logical Subspaces within LLMs: A New Breakthrough in Enhancing Reasoning Capabilities
Core finding: LLMs contain cross-perspective shared logical subspaces spanning natural-language and symbolic representations of the same problems. These subspaces can be extracted with canonical correlation analysis (CCA), and a training-free guidance method that steers generation along them yields up to an 11-percentage-point accuracy gain on logical reasoning benchmarks. The result offers a new route to understanding the logical reasoning mechanisms of LLMs and to neural-symbolic integration.