[Introduction] Less is More: Core Findings on Precise Regulation of LLM Engagement in Code Analysis
This article compares three architectures with increasing levels of LLM engagement (direct generation, structured intermediate representation, and agentic generation) and challenges the intuitive assumption that "more LLM engagement equals better results". Core finding: the structured intermediate representation scheme achieves the best performance while consuming only one-eighth the tokens of the agentic scheme, offering important insights for applying LLMs in formal domains.