Zing Forum

Reading

HAWK: A New Breakthrough in Visual Token Pruning for Multimodal Large Models

HAWK proposes a visual token pruning method based on the importance of attention heads. Without any training, it prunes 80% of visual tokens while retaining 96% of the original accuracy, offering a practical path to real-time deployment of multimodal large models.

Multimodal large models, Visual token pruning, Attention mechanism, Model compression, Inference optimization, Qwen2.5-VL, Transformer, Computational efficiency
Published 2026-04-09 13:09 · Recent activity 2026-04-10 09:46 · Estimated read 4 min
Section 01

HAWK: A New Breakthrough in Visual Token Pruning for Multimodal Large Models (Introduction)

HAWK proposes a visual token pruning method based on the importance of attention heads. Without any training, it prunes 80% of visual tokens while retaining 96% of the original accuracy, offering a practical path to real-time deployment of multimodal large models.

Section 02

Background: Efficiency Dilemma of Multimodal Large Models

Multimodal Large Language Models (MLLMs) are powerful, but high-resolution visual inputs cause the number of visual tokens to grow explosively, significantly increasing inference latency, compute consumption, and GPU memory usage. This makes MLLMs difficult to deploy in real-time scenarios such as autonomous driving, robot control, or instant visual question answering on mobile devices.
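The explosive growth can be made concrete with a quick sketch. The patching scheme below (14-pixel patches with 2×2 spatial merging, as in Qwen2.5-VL-style encoders) is an assumption for illustration; the point is that token count grows quadratically with image side length.

```python
# Illustrative sketch (not HAWK code): visual token count for a ViT-style
# encoder. Patch size 14 with 2x2 spatial merging is an assumed configuration
# modeled on Qwen2.5-VL-like encoders, used here only to show the scaling.

def visual_token_count(height: int, width: int, patch: int = 14, merge: int = 2) -> int:
    """Number of visual tokens after patchify + spatial merge."""
    effective = patch * merge  # each token covers an (effective x effective) pixel area
    return (height // effective) * (width // effective)

for side in (448, 896, 1792):
    print(side, visual_token_count(side, side))
# 448  ->  256 tokens
# 896  -> 1024 tokens
# 1792 -> 4096 tokens
```

Doubling the resolution quadruples the token count, and since attention cost grows superlinearly in sequence length, latency and memory grow even faster.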

Section 03

Current Status: Limitations of Traditional Visual Token Pruning

Traditional visual token pruning assumes that all attention heads contribute equally, ignoring the functional specialization of heads in the Transformer architecture: some attend to shape contours, some to color and texture, and others to spatial relationships. As a result, token-importance estimates fail to account for these differentiated head contributions.
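A minimal sketch of the assumption being critiqued: scoring each visual token by text-to-visual attention averaged uniformly over heads. The shapes and values are purely illustrative, not from any paper.

```python
import numpy as np

def uniform_head_scores(attn: np.ndarray) -> np.ndarray:
    """attn: (heads, text_len, vis_len) attention weights.
    Returns one importance score per visual token, treating all heads equally."""
    return attn.mean(axis=(0, 1))

# One specialized head strongly flags visual token 0; three flat heads do not.
focused = np.array([[0.97, 0.01, 0.01, 0.01]])
flat = np.full((1, 4), 0.25)
attn = np.stack([focused, flat, flat, flat])  # (4 heads, 1 text token, 4 visual tokens)
print(uniform_head_scores(attn))
# [0.43 0.19 0.19 0.19]
```

The uniform average dilutes the specialized head's signal from 0.97 down to 0.43, which is the failure mode head-aware weighting is meant to fix.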

Section 04

Core Innovation of HAWK: Head Importance-Aware Pruning

HAWK evaluates the importance of visual tokens along two dimensions:

1. Head importance weight: dynamically weights each attention head by its activation intensity and stability;
2. Text-guided attention: identifies the visual tokens relevant to the current text task.

Because it requires no training, HAWK can be applied directly to pre-trained MLLMs, making it plug-and-play.
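The two dimensions above can be sketched as follows. Note that the concrete weighting rule here (peak attention strength for intensity, inverse variance for stability) is my assumption to make the sketch runnable, not the paper's exact formula.

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def head_weights(attn: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """attn: (heads, text_len, vis_len) text-to-visual attention.
    Assumed proxy: weight heads that are strongly and stably activated."""
    intensity = attn.max(axis=-1).mean(axis=-1)                      # (heads,)
    stability = 1.0 / (attn.reshape(attn.shape[0], -1).var(axis=1) + eps)
    w = intensity * stability
    return w / w.sum()

def prune_tokens(attn: np.ndarray, keep_ratio: float = 0.2) -> np.ndarray:
    """Return indices of visual tokens to keep (e.g. 20% when pruning 80%)."""
    w = head_weights(attn)                                           # (heads,)
    scores = (w[:, None, None] * attn).sum(axis=0).mean(axis=0)      # (vis_len,)
    k = max(1, int(keep_ratio * attn.shape[-1]))
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(0)
attn = softmax(rng.standard_normal((8, 4, 100)))  # 8 heads, 4 text tokens, 100 visual tokens
kept = prune_tokens(attn, keep_ratio=0.2)
print(kept.shape)  # (20,)
```

Because scoring uses only attention weights already computed in the forward pass, a scheme like this needs no gradient updates, which is what makes the approach training-free and plug-and-play.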

Section 05

Experimental Evidence: Dual Improvement in Performance and Efficiency

In tests on the Qwen2.5-VL model, HAWK retains 96.0% of the original accuracy after pruning 80.2% of visual tokens; end-to-end inference time drops to 74.4% of the original; and GPU memory usage is significantly reduced, easing deployment on resource-constrained devices.

Section 06

Conclusion: Technical Significance and Industry Impact

Technically, HAWK offers an effective path past the computational efficiency bottleneck of MLLMs and reveals the functional specialization of attention heads. Industrially, its training-free nature suits fast-iterating engineering environments, enabling rapid evaluation of deployment optimizations and shortening the productization cycle.

Section 07

Limitations and Future Outlook

Limitations: HAWK currently targets only visual token pruning.

Future directions: extend to token optimization for other modalities; optimize the calculation of head importance weights; and explore more aggressive compression schemes by combining techniques such as quantization and knowledge distillation.