Zing Forum

OmniSelect: Dynamic Perception Token Compression Technology for Multimodal Large Models

OmniSelect is a training-free token compression framework for multimodal large language models. By dynamically allocating the importance ratio of audio and video, it achieves 1.19-1.33x inference acceleration and over 2.5GB GPU memory savings while maintaining 94%-99% accuracy.

Tags: Multimodal LLM · Token Compression · Video Understanding · Audio Processing · Model Optimization · Inference Acceleration · Open Source
Published 2026-05-17 11:41 · Last activity 2026-05-17 11:55 · Estimated read: 7 min

Section 01

Introduction



Section 02

Efficiency Dilemma of Multimodal Large Models

With the rapid development of multimodal large language models such as GPT-4V, Gemini, and Qwen2.5-Omni, AI can now understand text, images, audio, and video simultaneously. However, this capability comes at an enormous computational cost—an input containing a few minutes of video can generate tens of thousands of visual tokens, plus audio tokens, easily exceeding the model's context window limit.
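The scale of the problem is easy to see with a back-of-envelope calculation. The sampling rate, tokens-per-frame, and audio token rate below are illustrative assumptions, not figures from the OmniSelect project:

```python
# Rough token count for a short audio-video clip.
# All three constants are assumed, illustrative values.
FPS_SAMPLED = 2            # frames sampled per second of video
TOKENS_PER_FRAME = 196     # e.g. a 14x14 patch grid per frame
AUDIO_TOKENS_PER_SEC = 25  # typical audio encoder output rate

def clip_token_count(seconds: int) -> int:
    """Total multimodal tokens produced by a clip of given length."""
    video_tokens = seconds * FPS_SAMPLED * TOKENS_PER_FRAME
    audio_tokens = seconds * AUDIO_TOKENS_PER_SEC
    return video_tokens + audio_tokens

print(clip_token_count(180))  # a 3-minute clip -> 75060 tokens
```

Even under these modest assumptions, a three-minute clip already yields tens of thousands of tokens, which is why compression is needed at all.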

Traditional solutions uniformly compress tokens of all modalities, but this ignores a key fact: different queries depend on audio and video to varying degrees. Some questions mainly require video information, some rely on audio, and others need a combination of both.

How can multimodal tokens be compressed efficiently without sacrificing model performance? The OmniSelect project proposes an innovative dynamic, modality-aware compression scheme.


Section 03

Project Overview

OmniSelect is a fully training-free multimodal token compression framework designed for full-modal large language models. Unlike existing compression methods that use fixed modality guidance, OmniSelect can dynamically determine the relative importance of audio, video, or both based on the current query and allocate compression ratios accordingly.

The core innovation of the project is a dynamic modality-aware ratio allocation mechanism combined with a time-group pruning pipeline, which significantly reduces computational overhead while retaining as much query-relevant information as possible.


Section 04

Dynamic Modality-Aware Ratio Allocation

The first stage of OmniSelect uses the AudioCLIP model to estimate the relevance of the query to audio and video. Based on this estimation, the system dynamically selects one of three pruning strategies:

  • Video-centric pruning: When the query mainly relies on visual information, more video tokens are retained, and audio tokens are significantly compressed
  • Audio-centric pruning: When the query mainly relies on auditory information, more audio tokens are retained, and video tokens are significantly compressed
  • Uniform pruning: When the query depends equally on both modalities, a balanced compression strategy is adopted

This dynamic allocation mechanism ensures that the limited token budget is used on the most relevant modalities instead of being mechanically evenly distributed.
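The strategy selection above can be sketched as a simple budget split. The relevance scores would come from AudioCLIP similarities; the threshold margin and the 70/30 split are illustrative assumptions, not values from the project:

```python
def allocate_budget(audio_rel: float, video_rel: float,
                    total_budget: int, margin: float = 0.15):
    """Pick a pruning strategy from modality relevance scores.

    audio_rel / video_rel: query-modality relevance scores, e.g.
    derived from AudioCLIP similarities (the exact scoring and the
    margin/split constants here are assumptions).
    Returns (strategy, audio_token_budget, video_token_budget).
    """
    audio_share = audio_rel / (audio_rel + video_rel)
    if audio_share > 0.5 + margin:       # query leans on audio
        strategy, audio_ratio = "audio-centric", 0.7
    elif audio_share < 0.5 - margin:     # query leans on video
        strategy, audio_ratio = "video-centric", 0.3
    else:                                # query needs both equally
        strategy, audio_ratio = "uniform", 0.5
    audio_budget = int(total_budget * audio_ratio)
    return strategy, audio_budget, total_budget - audio_budget
```

For example, `allocate_budget(0.8, 0.2, 1000)` would keep 700 audio tokens and 300 video tokens under the audio-centric strategy.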


Section 05

Time-Group Pruning Pipeline

The second stage performs the actual token pruning and consists of two key steps:

Audio Token Pruning: Uses an attention-guided mechanism to identify and retain the most important audio segments for the current query. By analyzing the attention weight distribution, the system can locate key time windows and remove silent or irrelevant audio parts.
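A minimal sketch of attention-guided selection, assuming we have a query-to-audio attention matrix (shapes and scoring are assumptions; the real pipeline additionally groups tokens by time window):

```python
import numpy as np

def prune_audio_tokens(attn: np.ndarray, keep: int) -> np.ndarray:
    """Keep the `keep` audio token positions that receive the most
    query attention mass, preserving temporal order.

    attn: (num_query_tokens, num_audio_tokens) attention weights.
    """
    importance = attn.sum(axis=0)         # attention mass per audio token
    top = np.argsort(importance)[-keep:]  # highest-scoring positions
    return np.sort(top)                   # restore time order

attn = np.array([[0.1, 0.6, 0.1, 0.2],
                 [0.2, 0.5, 0.1, 0.2]])
print(prune_audio_tokens(attn, 2))  # -> [1 3]
```

Silent or irrelevant segments accumulate little attention mass and are dropped first.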

Visual Token Pruning: Based on the Bottom-K similarity algorithm, retains visual tokens most semantically relevant to the query. This method calculates the similarity between visual features and the query, prioritizing the retention of image regions with the highest information content.
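Visual pruning can be sketched the same way: score each visual token against the query embedding and drop the bottom-K least similar ones. Cosine similarity and the feature shapes below are assumptions, not details confirmed by the project:

```python
import numpy as np

def bottom_k_prune(vis: np.ndarray, query: np.ndarray,
                   drop_k: int) -> np.ndarray:
    """Drop the `drop_k` visual tokens least similar to the query
    (a sketch of Bottom-K pruning; shapes/metric are assumptions).

    vis:   (num_visual_tokens, dim) visual token features
    query: (dim,) pooled query embedding
    Returns indices of the kept tokens, in original order.
    """
    # Cosine similarity between each visual token and the query.
    sims = vis @ query / (np.linalg.norm(vis, axis=1)
                          * np.linalg.norm(query) + 1e-8)
    order = np.argsort(sims)        # ascending: least similar first
    return np.sort(order[drop_k:])  # drop the bottom-k, keep the rest
```

For example, with three tokens where the second is orthogonal to the query, `bottom_k_prune(vis, query, 1)` would keep indices 0 and 2.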


Section 06

Performance and Experimental Results

OmniSelect demonstrates strong performance across multiple multimodal benchmarks, summarized in the following sections.


Section 07

Inference Efficiency Improvement

  • Inference speed: 1.19x to 1.33x acceleration
  • Memory usage: 2.58GB to 2.77GB reduction in GPU memory
  • Accuracy retention: 94% to 99% accuracy compared to the full token setup

Section 08

Benchmark Comparison

In the WorldSense benchmark (30% token retention rate):

  • Full tokens: 45.62% accuracy
  • OmniZip (comparison method): 41.83% accuracy
  • OmniSelect: 44.42% accuracy

In the DailyOmni benchmark (45% token retention rate):

  • Full tokens: 62.82% accuracy
  • OmniZip (comparison method): 56.14% accuracy
  • OmniSelect: 58.06% accuracy

At the same compression ratio, OmniSelect clearly outperforms the fixed-strategy compression baseline and approaches the performance of the full-token setup.