Zing Forum

Reading

Prompt-Siren: Meta's Open-Source LLM Prompt Injection Offense and Defense Research Platform

Meta's Prompt-Siren is a research workbench specifically designed for developing and testing prompt injection attack and defense strategies for large language models (LLMs). It supports the AgentDojo and SWE-bench benchmarks, and provides fine-grained state machine control and an extensible plugin architecture.

Prompt-Siren提示注入LLM安全MetaAgentDojoSWE-benchAI安全研究HydraDocker沙箱攻防测试
Published 2026-04-07 23:15Recent activity 2026-04-07 23:22Estimated read 5 min
Prompt-Siren: Meta's Open-Source LLM Prompt Injection Offense and Defense Research Platform
1

Section 01

Introduction / Main Floor: Prompt-Siren: Meta's Open-Source LLM Prompt Injection Offense and Defense Research Platform

Meta's Prompt-Siren is a research workbench specifically designed for developing and testing prompt injection attack and defense strategies for large language models (LLMs). It supports the AgentDojo and SWE-bench benchmarks, and provides fine-grained state machine control and an extensible plugin architecture.

2

Section 02

Project Overview: Why Do We Need a Specialized Prompt Injection Research Platform

As large language models are increasingly integrated into various applications, prompt injection attacks have become one of the most pressing threats in the field of AI security. Attackers can hijack model behavior, steal sensitive information, or perform unauthorized operations through carefully crafted inputs. However, there has been a lack of standardized tool support for systematically researching and defending against such attacks.

Meta's Prompt-Siren is a professional research workbench designed specifically to address this issue. It provides a complete experimental environment that allows researchers to safely and reproducibly develop and test attack and defense strategies for LLMs.

3

Section 03

Core Design Philosophy: Fine-Grained Control and Extensibility

The design of Prompt-Siren embodies several key concepts:

4

Section 04

State Machine-Driven Execution Control

Unlike simple script-based attack testing, Prompt-Siren uses a state machine design to provide fine-grained control over agent execution. This means researchers can precisely define each stage of an attack, observe intermediate states, and intervene at key points. This design is particularly suitable for complex attack scenarios, such as progressive injection in multi-turn dialogues.

5

Section 05

Multi-Benchmark Support

The platform natively supports two important security benchmarks:

  • AgentDojo: Focuses on security evaluation in agent workflows
  • SWE-bench: Security testing based on real-world code editing tasks

This multi-benchmark support allows researchers to validate the effectiveness of attacks and defenses across different scenarios.

6

Section 06

Hydra Configuration System

Prompt-Siren uses Hydra for experiment orchestration, supporting powerful parameter scanning capabilities. Researchers can easily conduct large-scale experiments and compare performance under different configurations.

7

Section 07

Plugin-Based Architecture

The platform uses an extensible plugin system that supports customization of:

  • Agents: Define the behavior of the AI system under test
  • Attacks: Implement specific prompt injection techniques
  • Environments: Simulate different application contexts

This modular design allows the community to contribute new attack vectors and defense mechanisms, continuously enriching the research ecosystem.

8

Section 08

Resource and Cost Control

Considering the cost of LLM API calls, Prompt-Siren has built-in usage restriction mechanisms:

  • Cost cap control
  • Call count limits
  • Automatic caching and result organization

These features ensure that research can be conducted within a controllable budget.