Section 01
[Introduction] Analyzing the Reasoning Ability of Large Language Models via Conditional Entropy
This study uses conditional entropy, a tool from information theory, to analyze the reasoning mechanisms of large language models (LLMs) in depth, offering a new quantitative perspective for understanding and evaluating their reasoning ability. The posts that follow discuss in detail the research background, theoretical foundations, methodology, experimental findings, application prospects, technical challenges, and conclusions.
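As background for the analysis, conditional entropy H(Y|X) measures the uncertainty that remains about Y once X is known: H(Y|X) = -Σ p(x,y) log₂ p(y|x). The sketch below is an illustrative toy computation from a joint distribution, not the study's actual implementation; the example distributions are assumptions for demonstration only.

```python
import math

def conditional_entropy(joint):
    """Compute H(Y|X) in bits from a joint distribution table,
    where joint[x][y] holds p(x, y)."""
    h = 0.0
    for row in joint:
        px = sum(row)  # marginal p(x)
        for pxy in row:
            if pxy > 0:
                # p(y|x) = p(x, y) / p(x)
                h -= pxy * math.log2(pxy / px)
    return h

# X fully determines Y, so knowing X leaves no uncertainty: H(Y|X) = 0
deterministic = [[0.5, 0.0], [0.0, 0.5]]
# Y is independent of X and uniform over two outcomes: H(Y|X) = H(Y) = 1 bit
independent = [[0.25, 0.25], [0.25, 0.25]]

print(conditional_entropy(deterministic))  # 0.0
print(conditional_entropy(independent))    # 1.0
```

Intuitively, a lower conditional entropy of the model's answer given its reasoning trace indicates that the trace more strongly constrains the final output, which is the kind of quantity this line of analysis builds on.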