Section 01
CoT-Flow: Reshaping the Reasoning Paradigm of Large Language Models (Introduction)
This article introduces CoT-Flow, a paper accepted at ACL 2026. Its core idea is to transform discrete reasoning steps into continuous probabilistic flows and to quantify each step's contribution to the correct answer via Probabilistic Flow Progress (PFP). The method delivers two main results: inference acceleration without additional training, and reinforcement-learning alignment driven by dense per-step rewards.
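The dense-reward idea can be illustrated with a minimal sketch. The paper's exact PFP definition is not reproduced here; the `answer_prob` callback and the difference-based credit assignment below are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical PFP-style dense reward: credit each reasoning step with the
# change in the model's probability of the gold answer after that step.
from typing import Callable, List


def pfp_rewards(
    steps: List[str],
    answer_prob: Callable[[List[str]], float],
) -> List[float]:
    """reward_t = p(answer | steps[:t+1]) - p(answer | steps[:t])."""
    rewards = []
    prev = answer_prob([])  # probability of the answer before any reasoning
    for t in range(len(steps)):
        cur = answer_prob(steps[: t + 1])
        rewards.append(cur - prev)  # credit (or blame) assigned to step t
        prev = cur
    return rewards


if __name__ == "__main__":
    # Toy stand-in for an LLM scoring the gold answer given a reasoning prefix.
    toy = {0: 0.10, 1: 0.25, 2: 0.20, 3: 0.90}
    probs = lambda prefix: toy[len(prefix)]
    r = pfp_rewards(["step A", "step B", "step C"], probs)
    print([round(x, 2) for x in r])  # → [0.15, -0.05, 0.7]
```

A negative reward flags a step that made the correct answer less likely, which is the kind of per-step signal a dense-reward RL objective can exploit, in contrast to a single sparse reward at the final answer.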