Zing Forum

Reading

ComicJailbreak: How Comic Narratives Bypass the Safety Alignment Mechanisms of Multimodal Large Language Models

A Singaporean research team proposes the ComicJailbreak dataset, revealing a new attack vector where embedding harmful objectives into structured visual narratives can bypass the safety protections of Multimodal Large Language Models (MLLMs)

MLLM · Multimodal Safety · Jailbreak Attack · Visual Narrative · AI Safety Alignment · Comic Dataset · Adversarial Attack
Published 2026-03-30 13:37 · Recent activity 2026-03-30 13:51 · Estimated read: 10 min
1

Section 01

[Introduction] ComicJailbreak: New Findings on How Comic Narratives Bypass Safety Alignment of Multimodal Large Language Models

A Singaporean research team proposes the ComicJailbreak dataset, revealing a new attack vector where embedding harmful objectives into structured visual narratives (such as comics) can bypass the safety protections of Multimodal Large Language Models (MLLMs). This thread will introduce the research background, dataset design, attack mechanism, safety challenges, and defense recommendations in separate floors.

2

Section 02

Research Background: The Underestimated Safety Risks of Visual Modality

Research Background and Problem Awareness

With the rapid development of Multimodal Large Language Models (MLLMs), their safety alignment mechanisms have become a focus of attention in academia and industry. Traditionally, safety research has mainly focused on text-level attacks, such as prompt injection and jailbreak attacks. However, as one of the core inputs of MLLMs, the potential safety risks of the visual modality have long been underestimated. The research team from the Social AI Studio (Singapore) raises a key question: When harmful objectives are embedded into structured visual narratives, can MLLMs still adhere to their safety policies?

Comics are a distinctive narrative medium: they are naturally structured and serialized, building a complete narrative logic out of sequential frames, dialogue boxes, and visual elements. This makes them a covert carrier for attackers, who can split harmful intentions and disperse them across multiple visual elements, potentially bypassing traditional text-based safety detection mechanisms.

3

Section 03

ComicJailbreak Dataset: Design Philosophy and Construction Process

Overview of the ComicJailbreak Dataset

The core contribution of the ComicJailbreak project is the construction of a comic jailbreak dataset specifically for evaluating the safety of MLLMs. The dataset's design philosophy is innovative: it does not rely on explicit harmful text prompts; instead, it encodes potential harmful objectives into the visual narrative structure of comics, simulating complex real-world attack scenarios—where attackers use seemingly harmless image sequences to induce models to output content that violates safety policies.

The dataset construction process reflects an in-depth understanding of practical application scenarios: different types of comic samples (such as article-style) are generated via the create_dataset.py script, whose modular design facilitates academic reproduction and subsequent expansion of safety testing. The dataset has already been publicly released; the inference and evaluation code will be released in stages.
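To make the modular construction idea concrete, here is a minimal sketch of what an article-style sample generator might look like. This is purely illustrative: the class names (`ComicFrame`, `ComicSample`) and the function `build_article_style_sample` are assumptions for exposition, not the actual API of create_dataset.py.

```python
from dataclasses import dataclass, field

@dataclass
class ComicFrame:
    """One panel of a comic: a rendering prompt plus short dialogue text."""
    image_prompt: str
    dialogue: str = ""

@dataclass
class ComicSample:
    """A full comic sample: a style label, an ordered frame list, and the
    objective that the narrative as a whole encodes."""
    style: str
    target_behavior: str = ""
    frames: list = field(default_factory=list)

def build_article_style_sample(topic: str, steps: list) -> ComicSample:
    """Split one objective across sequential frames so that no single
    frame states it explicitly; only the combination does."""
    sample = ComicSample(style="article-style", target_behavior=topic)
    for i, step in enumerate(steps, 1):
        sample.frames.append(
            ComicFrame(image_prompt=f"Panel {i}: {step}",
                       dialogue=f"Step {i} of the story...")
        )
    return sample
```

The key design point mirrored here is separability: because each sample is built from small, typed pieces, new comic styles can be added as new builder functions without touching the existing ones.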

4

Section 04

Attack Mechanism: Leveraging the Progressiveness and Context Dependency of Visual Narratives

Technical Principles and Attack Mechanism

The core mechanism of the ComicJailbreak attack lies in leveraging the progressiveness and context dependency of visual narratives. When MLLMs process comic inputs, they need to integrate information from multiple frames to understand the complete storyline. Attackers can use this feature to combine individual harmless elements into a narrative structure with specific intentions.

The attack specifically involves three aspects:

Semantic manipulation of frame sequences: Carefully designing the order of frames to guide the model along a specific reasoning path. A single frame does not trigger an alert, but the serialized combination may lead to harmful outputs.

Coordination between dialogue boxes and visual elements: The text in comic dialogue boxes is short, but combined with the visual context, it carries rich semantics. Attackers can use the interweaving of text and images to disperse harmful intentions across multiple modalities.

Inductive effect of narrative structure: The narrative structure of comics guides the model's expectations; by manipulating the rhythm and plot development, attackers can induce the model to generate responses that violate safety policies.
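The three aspects above share one mechanical core: individually benign frames are fused into a single request whose combined meaning differs from any frame alone. The toy function below sketches that fusion step; it is a simplified assumption for illustration, not the paper's actual prompt-assembly code.

```python
def assemble_narrative_prompt(frames):
    """Fuse a sequence of (image_caption, dialogue) pairs into one
    multimodal-style request. Each frame looks harmless in isolation;
    the closing instruction asks the model to extend the combined story,
    which is where the serialized intent can surface."""
    parts = []
    for i, (caption, dialogue) in enumerate(frames, 1):
        parts.append(f"[Frame {i}] image: {caption} | speech: {dialogue}")
    parts.append("Continue the story from the last frame.")
    return "\n".join(parts)
```

A text-level filter that inspects each `parts` entry separately would see only short, innocuous fragments; the risk lives in the ordering and the final continuation request.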

5

Section 05

Challenges and Insights: The Necessity of Multimodal Safety Evaluation

Challenges and Insights for Safety Alignment

The findings of the ComicJailbreak research pose severe challenges to current MLLM safety alignment efforts: Traditional safety training is based on text datasets, reinforcing safe behavior by rejecting harmful prompts. However, when the attack vector shifts to multimodal visual narratives, single-modality protection mechanisms are insufficient.

Insights:

  1. Necessity of multimodal safety evaluation: With the popularity of visual-language models like GPT-4V and Gemini, safety research needs to go beyond text boundaries and establish a comprehensive evaluation framework covering images, videos, and other multimodal content.

  2. Risks of structured content: Structured visual content such as comics and video frame sequences have inherent logical connections and may be maliciously exploited. Safety mechanisms need to understand the semantic relationships across elements.

  3. Limitations of existing safety training data: Training data lacks sufficient multimodal adversarial samples, making it difficult for models to identify and resist new types of attacks.
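A comprehensive evaluation framework ultimately reduces to a scoring question: of the model's responses to adversarial samples, how many constitute a successful jailbreak? The sketch below shows the crudest possible scorer, keyword-based refusal matching; real evaluations (including, presumably, ComicJailbreak's forthcoming evaluation code) typically use a judge model instead, so treat the marker list and logic as illustrative assumptions.

```python
def attack_success_rate(responses,
                        refusal_markers=("i can't", "i cannot", "sorry")):
    """Count a response as a successful attack if it contains no refusal
    marker. Keyword matching is a weak proxy and over/under-counts;
    it only illustrates the shape of the metric."""
    if not responses:
        return 0.0
    successes = sum(
        1 for r in responses
        if not any(m in r.lower() for m in refusal_markers)
    )
    return successes / len(responses)
```

Because static marker lists decay as models change their refusal phrasing, this is exactly the kind of component a dynamic evaluation system would need to keep updating.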

6

Section 06

Defense Recommendations: Layered Protection from Training to Inference

Practical Applications and Defense Recommendations

Practical guidance for MLLM developers and deployers:

  • Training phase: Introduce more diverse multimodal safety data, especially adversarial samples containing complex visual narratives.

  • Inference phase: Implement layered detection mechanisms that not only analyze individual input elements but also evaluate the combined effects between elements.

  • Dynamic evaluation system: Static test sets are prone to obsolescence; the modular dataset construction method of ComicJailbreak provides a feasible path for continuously updating test cases.
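The layered inference-phase detection recommended above can be sketched as a two-stage guard: screen each frame on its own, then screen the concatenated narrative, since the harm may only emerge from the combination. The filter callables here are placeholders, an assumed interface rather than any particular safety system's API.

```python
def layered_check(frames, frame_filter, sequence_filter):
    """Two-stage guard for a sequence of frame texts.

    Stage 1: reject if any single frame trips the per-element filter.
    Stage 2: reject if the combined narrative trips the sequence-level
    filter, catching intent that is dispersed across frames.
    """
    for frame in frames:
        if frame_filter(frame):
            return "blocked: single frame"
    combined = " ".join(frames)
    if sequence_filter(combined):
        return "blocked: combined narrative"
    return "allowed"
```

The second stage is what single-modality, per-element protection lacks: it evaluates the combined effect of elements, which is precisely the channel the ComicJailbreak attack exploits.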

7

Section 07

Conclusion: A New Milestone in Multimodal AI Safety Research

Conclusion and Future Outlook

ComicJailbreak represents an important milestone in multimodal AI safety research, revealing a new attack vector and reminding us that technological progress must be accompanied by safety protection of matching breadth and depth. The research value of visual narratives for AI safety has only begun to be explored.

With the upcoming release of inference and evaluation code, we look forward to more researchers joining to jointly build a more robust multimodal AI safety system.


Project Link: https://github.com/Social-AI-Studio/ComicJailbreak

Related Paper: Structured Visual Narratives Undermine Safety Alignment in Multimodal Large Language Models (arXiv:2603.21697)