Section 01
[Introduction] Large Language Models and Adversarial Malware: Current Status and Outlook of AI-Driven Cyber Threats
This study examines the capabilities of large language models (LLMs) in generating adversarial malware, focusing on the potential and limitations of current AI technologies in both offensive and defensive cyber security. The research aims to answer two questions: What is the current level of LLM-generated adversarial malware? How far are we from the scenario of "AI autonomously generating malicious code that bypasses detection"? The findings provide important reference points for planning future security defenses.