Inducing Overthinking: A New DoS Attack Paradigm Against Large Reasoning Model Systems

The research team discovered an 'overthinking' vulnerability in large reasoning models: by constructing adversarial inputs with a hierarchical genetic algorithm, a model's output length can be increased by up to 26.1 times, forming a new denial-of-service (DoS) attack vector.

Large Reasoning Models · Adversarial Attacks · DoS Attacks · AI Security · Chain-of-Thought · Genetic Algorithms · Black-box Attacks · Model Security
Published 2026-05-13 18:57 · Recent activity 2026-05-14 10:49 · Estimated read 7 min

Section 01

[Introduction] Inducing Overthinking: A New DoS Attack Paradigm Against Large Reasoning Model Systems

The research team discovered an 'overthinking' vulnerability in Large Reasoning Models (LRMs). By constructing adversarial inputs with a hierarchical genetic algorithm, a model's output length can be increased by up to 26.1 times, forming a new denial-of-service (DoS) attack vector that targets the semantic layer of AI systems. The attack exploits the model's inherent reasoning mechanism, driving up computational resource consumption and service latency. It can also be mounted in a black-box setting and transfers well across models, posing challenges for the secure deployment of large models in critical systems.


Section 02

Background: The 'Overthinking' Tendency of Large Reasoning Models

Large reasoning models (such as OpenAI's o-series and DeepSeek-R1) achieve powerful multi-step reasoning through the chain-of-thought mechanism. However, when faced with incomplete or logically inconsistent inputs, they tend to fall into repeated pondering and generate abnormally long reasoning traces. While this tendency may reflect caution, it becomes an exploitable vulnerability under malicious inputs.


Section 03

Attack Principle: DoS Attack Caused by Logical Perturbation Leading to Reasoning Inflation

The core of the attack is to systematically perturb the logical structure of the input, triggering the model's overthinking mechanism and inflating the response. Its characteristics include: a response-length increase of up to 26.1 times (on the MATH benchmark), a significant rise in GPU computation time and energy consumption, degraded service latency, and black-box implementability (no access to model internals required). Unlike traditional network-layer DoS attacks, this attack targets the semantic layer: inputs are syntactically valid but logically flawed, making them harder to detect and defend against.
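As a concrete illustration of such a semantic-layer perturbation, consider removing a necessary premise from a math problem: the prompt stays syntactically well-formed but becomes logically underdetermined, which is exactly the condition that can push a reasoning model into extended self-checking. The example below is hypothetical, not taken from the paper.

```python
# Hypothetical logic perturbation (illustrative, not from the paper):
# the perturbed prompt is syntactically valid but underdetermined.
original  = "Given x + y = 10 and x - y = 2, find x."
perturbed = "Given x + y = 10, find x."  # necessary premise removed
```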


Section 04

Technical Implementation: Automated Attack Framework Using Hierarchical Genetic Algorithm (HGA)

The research uses a hierarchical genetic algorithm (HGA) to construct adversarial inputs: 1. structured problem decomposition (e.g., the known conditions, objective, and intermediate steps of a mathematical problem); 2. a composite fitness function (jointly optimizing response length and overthinking markers such as the language patterns of repeated pondering and self-doubt); 3. a black-box optimization strategy (optimizing only through input-output feedback, matching the realistic setting of commercial APIs).
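The sketch below illustrates this pipeline under stated assumptions: the structured problem fields, the genome encoding, the mutation operator, the marker list, and the simulated query_model stand-in are all illustrative choices, not the authors' implementation. The upper level of each genome decides which structural components to perturb, and the lower level decides how (drop or contradict).

```python
import random

# Minimal HGA sketch (assumptions as stated above; not the paper's code).
MARKERS = ["wait", "hmm", "let me reconsider", "but actually"]  # pondering cues

def query_model(prompt: str) -> str:
    """Black-box oracle: swap in a real API call. Simulated here so the
    sketch runs without network access."""
    return " ".join(random.choice(["step", "wait", "hmm", "so"])
                    for _ in range(random.randint(50, 500)))

def fitness(response: str, w_len: float = 1.0, w_mark: float = 25.0) -> float:
    """Composite objective: response length plus weighted overthinking markers."""
    length = len(response.split())
    marks = sum(response.lower().count(m) for m in MARKERS)
    return w_len * length + w_mark * marks

# Structured decomposition of the problem; the genome maps each component
# to an action. Upper level: WHICH components to perturb; lower level: HOW.
COMPONENTS = ["conditions", "objective", "steps"]
ACTIONS = ["keep", "drop", "contradict"]

def render(problem: dict, genome: dict) -> str:
    parts = []
    for c in COMPONENTS:
        if genome[c] == "keep":
            parts.append(problem[c])
        elif genome[c] == "contradict":
            parts.append(problem[c] + " (but also assume the opposite holds)")
        # "drop": omit the component entirely
    return " ".join(parts)

def mutate(genome: dict) -> dict:
    child = dict(genome)
    c = random.choice(COMPONENTS)          # upper level: pick a component
    child[c] = random.choice(ACTIONS)      # lower level: pick a perturbation
    return child

def hga_attack(problem: dict, pop_size: int = 8, generations: int = 20) -> dict:
    pop = [{c: random.choice(ACTIONS) for c in COMPONENTS} for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, reverse=True,
                        key=lambda g: fitness(query_model(render(problem, g))))
        elite = scored[: pop_size // 2]                              # selection
        pop = elite + [mutate(random.choice(elite)) for _ in elite]  # variation
    return max(pop, key=lambda g: fitness(query_model(render(problem, g))))

problem = {"conditions": "Let x + y = 10 and x - y = 2.",
           "objective": "Find x.",
           "steps": "Add the two equations to eliminate y."}
print(hga_attack(problem))
```

Note that the only interaction with the target is through query_model's input-output pairs, which is what makes the black-box setting realistic for commercial APIs.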


Section 05

Experimental Validation: Cross-Model Effectiveness and Transferability

The attack was validated on four advanced large reasoning models: 1. adversarial inputs significantly increase response length; 2. the attack transfers well (adversarial inputs generated against small proxy models remain effective against large commercial models such as GPT-4 and Claude); 3. HGA-optimized samples far outperform manually constructed missing-premise baselines, indicating that the overthinking vulnerability follows patterns more complex than simple premise removal.
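One way to quantify the reported effect is a per-model length-inflation ratio. The sketch below assumes a query callable and uses word count as a rough length proxy; both are assumptions for illustration, not the paper's metric definition.

```python
def inflation_ratio(query, benign: str, adversarial: str) -> float:
    """Ratio of adversarial to benign response length for one target model.
    query: callable prompt -> response text (proxy or commercial API).
    An adversarial input optimized on a proxy model can be replayed here
    against a different target to probe transferability."""
    return len(query(adversarial).split()) / max(1, len(query(benign).split()))
```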


Section 06

Defense Recommendations: Potential Strategies to Counter Overthinking Attacks

Potential defense directions include: 1. input validation and filtering (checking logical consistency and asking users to clarify incomplete or contradictory inputs); 2. reasoning length limits (setting token or step budgets and interrupting to return a partial result once the threshold is exceeded); 3. anomaly detection (monitoring statistical features of reasoning patterns to flag abnormal thinking behavior); 4. adversarial training (introducing adversarial samples to improve robustness).
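Defenses 2 and 3 can be combined in a simple serving-side guard. The sketch below is a minimal illustration assuming a token-stream interface; the budget, marker list, and anomaly threshold are illustrative values, not recommendations from the paper.

```python
MAX_TOKENS = 2048                                  # illustrative budget
MARKERS = ("wait", "let me reconsider", "hmm")     # illustrative cue list

def guard(stream):
    """Consume a reasoning-token stream: truncate at the budget (defense 2)
    and flag responses with an anomalous pondering-marker rate (defense 3)."""
    tokens = []
    for tok in stream:
        tokens.append(tok)
        if len(tokens) >= MAX_TOKENS:
            break                                  # interrupt, return partial result
    text = " ".join(tokens)
    marker_rate = sum(text.lower().count(m) for m in MARKERS) / max(1, len(tokens))
    return text, marker_rate > 0.01                # (text, anomaly flag)

# Example: a short stream that trips the marker-rate check.
text, flagged = guard(iter("wait hmm wait let me reconsider this".split()))
print(flagged)  # True
```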


Section 07

Conclusions and Implications: AI Security Needs to Focus on System-Level Behavioral Attacks

This study reveals a systemic challenge for AI security: growing model capability brings new attack surfaces, and the overthinking vulnerability is rooted in the reasoning mechanism itself. Deployments in critical systems therefore need a comprehensive security evaluation that covers not only traditional adversarial examples but also attacks that consume computational resources. The authors call on the industry to take this vulnerability seriously, incorporate it into the security considerations of product design and deployment, and build reliable AI systems through an ongoing interplay of attack and defense.