Zing Forum

Reading

ProjectTextAttack: A Study on Robustness Evaluation of Large Language Models Against Jailbreak Attacks

A systematic study based on the TextAttack framework that evaluates the security of mainstream large language models using 11 jailbreak attack techniques, revealing the vulnerability of current model safety alignment mechanisms.

越狱攻击AI安全大语言模型对抗攻击安全对齐提示工程LLaMAQwen模型评估
Published 2026-03-29 20:38Recent activity 2026-03-29 20:49Estimated read 6 min
ProjectTextAttack: A Study on Robustness Evaluation of Large Language Models Against Jailbreak Attacks
1

Section 01

ProjectTextAttack: Guide to the Study on Robustness Evaluation of Large Language Models Against Jailbreak Attacks

This study is based on the TextAttack framework and evaluates the security of three mainstream open-source large language models—LLaMA3.3, GPT-OSS, and Qwen3—using 11 jailbreak attack techniques. The core question is whether the current model safety alignment mechanisms can resist structured jailbreak attacks. The study found that GPT-OSS exhibits excellent resistance (attack success rate of only 5%), while LLaMA3.3 has the most severe vulnerabilities (attack success rate of 70%), revealing differences in the vulnerability of safety alignment mechanisms among mainstream models.

2

Section 02

Research Background: Practical Challenges of AI Safety Alignment

As the capabilities of large language models improve, ensuring they are not maliciously used to generate harmful content has become a core issue in AI safety. Developers invest significant resources in safety alignment training, but attackers continue to develop "jailbreak" techniques to bypass safety guardrails. This project was completed by an ECE Bachelor student team (including Philippe PENG) under the guidance of mentors Yann FORNIER and Simon VANDAMME, aiming to systematically evaluate the robustness of mainstream LLMs against jailbreak attacks.

3

Section 03

Research Methods: Framework Extension and Dataset Construction

The study is based on the TextAttack framework (extending its jailbreak testing capabilities for generative LLMs) and manually constructs a dataset containing 141 prompts, covering 11 jailbreak attack techniques: DAN style, academic framework, developer mode, code obfuscation, fictional narrative, historical role-playing, hypothetical distance, instruction manipulation, language switching, film/game scenarios, and social engineering. Each prompt includes metadata such as id, technique, and quest, stored in CSV format.

4

Section 04

Experimental Design and Test Models

The test models include three mainstream open-source LLMs:

Model Type Version API Platform Temperature Top-p
LLaMA3.3 Open-source llama-3.3-70b-versatile Groq 0.7 0.9
GPT-OSS Open-source openai/gpt-oss-120b Groq 0.7 0.9
Qwen3 Open-source qwen/qwen3-32b Groq 0.7 0.9
All model parameters are consistent, and environmental consistency is ensured through the promptfoo evaluation framework and Docker containerized deployment.
5

Section 05

Core Results: Significant Differences in Model Safety Performance

Evaluation metrics include Attack Success Rate (ASR), Personality Adoption Rate, and Hallucination Rate. The results are as follows:

Model ASR (%) Personality Adoption Rate (%) Hallucination Rate (%)
llama-3.3-70b 70.0 20.0 3.6
qwen3-32b 58.6 15.7 2.9
gpt-oss-120b 5.0 0.7 0.7
Key findings: GPT-OSS has the strongest resistance, LLaMA3.3 has the most severe vulnerabilities, and Qwen3 performs moderately.
6

Section 06

Research Implications and Recommendations

  1. Safety alignment requires continuous iteration: Even after alignment training, LLMs are still vulnerable to structured attacks; 2. Safety responsibility for open-source models: The high ASR of LLaMA and Qwen raises concerns about the risk of misuse of open-source models; 3. Standardization of evaluation: This project demonstrates the importance of systematic evaluation (standardized attack classification, unified processes, multi-dimensional metrics), which can provide data support for model selection and safety improvements.