Zing Forum

Reading

ICSE 2026 Cutting-Edge Research: Generating High-Quality Software Vulnerability Data Using Large Language Models

This article provides an in-depth interpretation of VICS-LLM-VulGen, a research work accepted by ICSE 2026. It is a systematic effort exploring how to use prompt engineering to optimize large language models for generating realistic vulnerability data. The research team compared the vulnerability generation capabilities of various models including GPT-4o, Claude, CodeLlama, and DeepSeek Coder, and proposed the VICS (Vulnerability-Informed Contextual Structuring) framework to significantly improve generation quality.

Software Security · Vulnerability Generation · Large Language Models · ICSE 2026 · Prompt Engineering · CWE · CodeQL · Data Augmentation · Security Testing
Published 2026-05-15 08:42 · Recent activity 2026-05-15 09:20 · Estimated read: 4 min

Section 01

[Introduction] ICSE 2026 Cutting-Edge Research: VICS-LLM-VulGen Generates High-Quality Vulnerability Data Using Large Language Models

This article interprets VICS-LLM-VulGen, a research work accepted by ICSE 2026. The work uses prompt engineering to optimize large language models for generating realistic vulnerability data, compares the vulnerability generation capabilities of models such as GPT-4o, Claude, and CodeLlama, and proposes the VICS framework, which significantly improves generation quality and addresses the scarcity of software vulnerability data.


Section 02

Research Background: The Dilemma of Scarce Software Vulnerability Data

Software security testing and the training of vulnerability detection models rely on high-quality vulnerability data, but real-world data is scarce: the CVE database contains only a limited number of complete, usable samples. Traditional acquisition methods (manual annotation, open-source repository mining) are either costly or yield samples with limited pattern diversity, which makes exploring large language models for automatically generating synthetic vulnerability data a necessary direction.


Section 03

Core Methods: VICS Framework and Multi-Model Comparison

The paper proposes the VICS (Vulnerability-Informed Contextual Structuring) framework, which injects structured context such as CWE classification and vulnerability trigger conditions into prompts. The comparison covers closed-source models (GPT-4o, Claude), open-source code-specialized models (CodeLlama 34B, DeepSeek Coder), general-purpose models (Llama3, Qwen2.5), and reasoning-enhanced models (DeepSeek R1), and the reported improvements hold across these model families.
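The paper does not publish its exact prompt template, so the following is only an illustrative sketch of the idea of injecting structured CWE context into a prompt; the field names and the `build_vics_prompt` helper are hypothetical, not the authors' actual format.

```python
# Hypothetical sketch of VICS-style structured prompting.
# The template and field names are assumptions, not the paper's format.

def build_vics_prompt(cwe_id: str, cwe_name: str, trigger: str, context: str) -> str:
    """Assemble a prompt that injects structured vulnerability context."""
    return (
        "You are generating a realistic vulnerable C function.\n"
        f"CWE: {cwe_id} ({cwe_name})\n"
        f"Trigger condition: {trigger}\n"
        f"Code context: {context}\n"
        "Produce the vulnerable function followed by a one-line explanation."
    )

prompt = build_vics_prompt(
    cwe_id="CWE-787",
    cwe_name="Out-of-bounds Write",
    trigger="loop index exceeds the destination buffer length",
    context="a C string-copy utility used in a network parser",
)
print(prompt)
```

The point of the structuring is that the model receives machine-checkable constraints (CWE ID, trigger condition) rather than a free-form request, which is what allows later validation steps to test the generated sample against a specific weakness class.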


Section 04

Experimental Design and Evidence Validation

Experiments are organized around five research questions: RQ1, the sample generation pipeline; RQ2, dataset partitioning and editing; RQ3, comparison with traditional generation tools (VGX, VulGen); RQ4, validation with CodeQL that generated samples map to real CVEs; RQ5, evaluation of downstream practical value in a RAG-based setting. The tech stack includes Python, PyTorch, CodeQL, and Joern, and the project is open source under the MIT license.
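CodeQL analyses (e.g. `codeql database analyze ... --format=sarif-latest`) emit SARIF JSON, so an RQ4-style check of whether a generated sample triggered the expected rule can be sketched as below. The SARIF fragment, the rule ID, and the `matching_findings` helper are illustrative assumptions, not the paper's actual pipeline.

```python
import json

# Minimal sketch: count SARIF findings whose ruleId matches the CodeQL
# rule we expect the generated vulnerable sample to trigger.
def matching_findings(sarif: dict, expected_rule: str) -> int:
    count = 0
    for run in sarif.get("runs", []):
        for result in run.get("results", []):
            if result.get("ruleId") == expected_rule:
                count += 1
    return count

# Tiny hand-written SARIF fragment for illustration.
sample = {
    "runs": [
        {"results": [
            {"ruleId": "cpp/overflow-buffer"},
            {"ruleId": "cpp/unused-variable"},
        ]}
    ]
}
print(matching_findings(sample, "cpp/overflow-buffer"))  # prints 1
```

A sample that yields zero matching findings for its intended CWE would be a candidate for the low-quality filtering the paper mentions in its limitations.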


Section 05

Research Findings and Practical Value

The VICS framework significantly improves the realism and diversity of generated samples. Practical value includes:

1. Low-cost data augmentation for vulnerability detection models;
2. Security test case generation;
3. Security training materials;
4. Benchmark samples for static analysis tools.
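As a sketch of the first use case, mixing synthetic samples into a detector's training set can be as simple as the following; the record schema, the `ratio` parameter, and the `augment` helper are illustrative assumptions, not the paper's procedure.

```python
import random

# Illustrative sketch: augment a real training set for a vulnerability
# detection model with LLM-generated samples. The schema is assumed.
def augment(real: list[dict], synthetic: list[dict],
            ratio: float, seed: int = 0) -> list[dict]:
    """Add up to ratio * len(real) synthetic samples, then shuffle."""
    rng = random.Random(seed)
    k = min(len(synthetic), int(len(real) * ratio))
    mixed = real + rng.sample(synthetic, k)
    rng.shuffle(mixed)
    return mixed

real = [{"code": f"real_{i}", "label": 1} for i in range(100)]
synthetic = [{"code": f"gen_{i}", "label": 1} for i in range(50)]
train = augment(real, synthetic, ratio=0.3)
print(len(train))  # prints 130
```

Capping the synthetic fraction with `ratio` matters in practice: detectors trained on mostly synthetic data can overfit to generation artifacts rather than real vulnerability patterns.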


Section 06

Limitations and Future Directions

Limitations: generated samples still require manual or automated filtering of low-quality outputs, and the work focuses on C/C++ memory vulnerabilities, with limited support for other languages and vulnerability types. Future directions: prompt optimization via reinforcement learning, a multi-language framework, an automatic validation pipeline, and integration of multi-modal inputs.