Section 01
PromptAudit: An End-to-End Platform for Systematically Evaluating the Impact of Prompt Engineering on Code Vulnerability Detection
In AI security, accurately evaluating the ability of large language models (LLMs) to detect code vulnerabilities remains a core challenge. PromptAudit is an end-to-end experimental platform designed for systematically studying how prompt engineering techniques affect code security classification. By holding variables such as the dataset and model backend fixed and varying only the prompt strategy, it enables controlled comparative experiments, helping researchers understand the real impact of prompt strategies on vulnerability detection performance.
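The controlled-comparison design described above can be sketched in a few lines of Python. All names below (`PROMPT_STRATEGIES`, `mock_model`, `evaluate`) are hypothetical illustrations, not PromptAudit's actual API; a real run would replace the mock classifier with a fixed LLM backend.

```python
"""Minimal sketch of a controlled prompt-strategy comparison:
the dataset and model backend are held fixed, and only the
prompt template changes between runs."""

# The only independent variable: the prompt strategy (hypothetical templates).
PROMPT_STRATEGIES = {
    "zero_shot": "Is the following code vulnerable? Answer yes or no.\n{code}",
    "role_based": ("You are a security auditor. Is this code vulnerable? "
                   "Answer yes or no.\n{code}"),
}

# Fixed variable: a tiny labeled dataset (1 = vulnerable, 0 = safe).
DATASET = [
    {"code": "strcpy(buf, user_input);", "label": 1},
    {"code": "strncpy(buf, user_input, sizeof(buf) - 1);", "label": 0},
]

def mock_model(prompt: str) -> int:
    """Fixed variable: a stand-in for the LLM backend.
    Here it simply flags unbounded strcpy calls."""
    return 1 if "strcpy(" in prompt else 0

def evaluate(strategy_name: str) -> float:
    """Accuracy of one prompt strategy over the fixed dataset."""
    template = PROMPT_STRATEGIES[strategy_name]
    correct = sum(
        mock_model(template.format(code=ex["code"])) == ex["label"]
        for ex in DATASET
    )
    return correct / len(DATASET)

# Because everything except the prompt is fixed, any accuracy difference
# between strategies is attributable to the prompt alone.
results = {name: evaluate(name) for name in PROMPT_STRATEGIES}
print(results)
```

With a real model backend, the same loop yields a per-strategy score table in which prompt design is the only varying factor.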