Section 01
[Introduction] Core Summary of an Empirical Study on Large Language Models for Vulnerability Analysis of Automotive Binary Programs
This paper presents an empirical study of large language models (LLMs) applied to vulnerability analysis of automotive binary programs, examining their capabilities, limitations, and practical prospects for automotive software security. The study finds that although LLMs show promise in vulnerability detection, they suffer from limitations such as poor cross-architecture generalization and high false-positive rates. Integrating them with traditional static analysis tools improves both detection coverage and accuracy, offering a new path for automotive software security testing.
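One way the integration idea above can be realized is to cross-check the two result sets: the union of LLM and static-analyzer findings widens coverage, while their intersection flags higher-confidence results and filters LLM false positives. The sketch below is purely illustrative; the function and data names (`combine_findings`, `llm_findings`, `static_findings`) are assumptions, not artifacts of the study.

```python
# Hypothetical sketch: merging LLM-flagged findings with a static analyzer's
# output. A "finding" is modeled here as a (vulnerability_class, function) pair.

def combine_findings(llm_findings, static_findings):
    """Return (coverage, high_confidence):
    coverage        -- union of both tools, for broader detection
    high_confidence -- intersection, where both tools agree
    """
    coverage = llm_findings | static_findings
    high_confidence = llm_findings & static_findings
    return coverage, high_confidence

# Illustrative example data (not from the study):
llm_findings = {
    ("buffer-overflow", "parse_can_frame"),
    ("format-string", "log_message"),        # possible LLM false positive
}
static_findings = {
    ("buffer-overflow", "parse_can_frame"),  # confirmed by both tools
    ("null-dereference", "init_ecu"),
}

coverage, high_confidence = combine_findings(llm_findings, static_findings)
```

Agreement-based filtering like this trades some recall for precision; a real pipeline would also need to normalize finding locations (addresses vs. function names) before comparing the two sets.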