Zing Forum

Reading

LLM System Instruction Security Vulnerability: Encoding Attacks Can Bypass Protections to Steal Sensitive Information

Researchers have found that by repackaging information extraction requests as encoding or structured output tasks, attackers can bypass LLM's rejection mechanisms and successfully steal sensitive content from system instructions. The study proposes an automated evaluation framework and a chain-of-thought-based mitigation strategy.

LLM安全系统指令泄露编码攻击提示词注入AI安全OWASP思维链安全防护
Published 2026-04-01 23:45Recent activity 2026-04-02 11:19Estimated read 6 min
LLM System Instruction Security Vulnerability: Encoding Attacks Can Bypass Protections to Steal Sensitive Information
1

Section 01

[Introduction] LLM System Instruction Security Vulnerability: Encoding Attacks Can Bypass Protections to Steal Sensitive Information

Researchers have found that attackers can bypass LLM's rejection mechanisms and steal sensitive content from system instructions by packaging information extraction requests as encoding or structured output tasks. The study proposes an automated evaluation framework and a chain-of-thought-based mitigation strategy, providing a new direction for LLM security protection.

2

Section 02

Background: Security Risks of System Instructions and OWASP Risks

In LLM applications, system instructions define model behavior guidelines and security policies, often containing sensitive information such as API keys and internal policies. System instruction leakage is one of the top ten security risks for OWASP LLM applications; once obtained by attackers, they can understand internal mechanisms, gain credentials for restricted resources, or bypass security measures.

3

Section 03

Blind Spots in Current Protection: Limitations of Rejection-Based Strategies

Most LLM applications adopt rejection-based protection strategies to reject direct requests for system instructions. However, this strategy assumes attackers only use direct queries, ignoring the possibility that attackers can bypass protection by reconstructing requests (e.g., packaging as encoding/structured output tasks).

4

Section 04

Encoding Attacks: A New Way to Bypass Protection and Test Results

The study reveals the 'structured serialization attack': attackers can bypass rejection mechanisms by asking the model to output configuration information in formats like JSON, YAML, or Base64. Testing on 4 mainstream models and 46 system instructions showed an attack success rate of over 70%, as the model focuses more on format requirements than security constraints.

5

Section 05

Analysis of Attack Mechanism: Why Do Encoding Attacks Succeed?

The success of encoding attacks is related to LLM's attention mechanism and instruction hierarchy. When directly queried, security instructions have high activation; however, encoding requests are packaged as technical tasks, reducing the activation of security instructions, and the model focuses more on format output. Additionally, system instruction design does not fully consider indirect leakage, as developers' mindset is focused on protecting against direct queries.

6

Section 06

Mitigation Strategy: Chain-of-Thought Reshaping to Enhance System Instruction Security

The study proposes using a Chain-of-Thought (CoT) reasoning model to reshape system instructions: let the CoT model analyze the original instructions and generate semantically equivalent but more robust versions, adding protection against indirect attacks (e.g., explicitly prohibiting system instructions from being output in any format). This strategy does not require retraining the model, can be deployed quickly, and significantly reduces the attack success rate.

7

Section 07

Industry Insights: Transition from Protection to Comprehensive Security Design

Recommendations for the industry: 1. Traditional rejection strategies are insufficient; need to shift to preventing leakage in any form, and strengthen threat modeling and testing. 2. System instructions should explicitly prohibit indirect leakage and be audited regularly with automated tools. 3. When enterprises choose third-party LLM services, they need to pay attention to their security practices and balance security and usability.

8

Section 08

Conclusion: Security is a Continuous Game

The popularization of LLM applications drives the evolution of attack methods; encoding attacks remind us that security protection needs to deeply understand model mechanisms. The study not only reveals specific problems but also provides a thinking framework: the balance between AI system security and functionality requires continuous attention and adjustment, and we should maintain security leadership through testing, learning, and improvement.