Section 01
[Introduction] Inducing Overthinking: A New DoS Attack Paradigm for Large Reasoning Model Systems
The research team discovered an 'overthinking' vulnerability in Large Reasoning Models (LRMs): adversarial inputs constructed with a hierarchical genetic algorithm can inflate a model's output length by up to 26.1 times, forming a new denial-of-service (DoS) attack vector that targets the semantic layer of AI systems. The attack exploits the model's inherent reasoning mechanism, driving up computational resource consumption and degrading service latency. It can be mounted in a black-box setting and transfers well across models, posing a security challenge for large models deployed in critical systems.
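To illustrate the idea (not the paper's actual method), the core loop of a genetic search that evolves prompts to maximize a model's output length can be sketched as follows. This is a minimal, non-hierarchical sketch under stated assumptions: `model` is a hypothetical black-box callable returning the model's text response, and the mutation vocabulary of reasoning-inducing phrases is illustrative only.

```python
import random

def genetic_prompt_search(model, seed_prompt, generations=10,
                          pop_size=8, mutation_rate=0.5):
    """Evolve prompt variants whose fitness is the length of the
    (black-box) model's response -- a toy stand-in for the paper's
    hierarchical genetic algorithm."""
    # Illustrative phrases assumed to trigger longer reasoning chains.
    vocab = ["think step by step", "verify every case",
             "reconsider your answer", "prove it rigorously"]

    def mutate(prompt):
        if random.random() < mutation_rate:
            return prompt + " " + random.choice(vocab)
        return prompt

    # Initial population: mutated copies of the seed prompt.
    population = [mutate(seed_prompt) for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness = response length; keep the top half as elites.
        scored = sorted(population, key=lambda p: len(model(p)), reverse=True)
        elites = scored[: pop_size // 2]
        # Refill the population by mutating randomly chosen elites.
        population = elites + [mutate(random.choice(elites))
                               for _ in range(pop_size - len(elites))]
    return max(population, key=lambda p: len(model(p)))
```

In practice the fitness call is the expensive step, since each candidate prompt requires a full model query; the black-box nature of the attack follows from the fact that only the response (not gradients or internals) is needed to score candidates.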