Zing Forum

Reading

G²TR: Generation-Guided Visual Token Compression for Multi-Modal Models with Separated Encoders

G²TR is a generation-guided visual token compression framework that evaluates token importance via VAE latent-space consistency to achieve balanced selection and redundant merging. Experiments show that the method roughly halves the visual tokens and cuts pre-filling computation by 1.94x while maintaining inference accuracy and editing quality.

Multi-Modal Models · Visual Token Compression · Image Generation Models · Efficiency · VAE · Unified Multi-Modal Models · Image Editing · Inference Optimization
Published 2026-05-12 23:56 · Recent activity 2026-05-13 11:28 · Estimated read 5 min

Section 01

G²TR: Introduction to the Generation-Guided Visual Token Compression Framework

Core Overview of the G²TR Framework

G²TR is a generation-guided visual token compression framework for multi-modal models with separated encoders. It evaluates token importance via VAE latent-space consistency to achieve balanced selection and redundant merging. Experiments show that the method roughly halves the visual tokens and cuts pre-filling computation by 1.94x while maintaining inference accuracy and editing quality.


Section 02

Efficiency Bottlenecks of Multi-Modal Models and Challenges of Separated Encoders

Background: Efficiency Bottlenecks and Unique Requirements

Unified Multi-Modal Models (UMMs) drive vision-language fusion, but visual token processing is a major efficiency bottleneck: attention cost grows quadratically with sequence length. UMMs with separated encoder architectures need token compression that supports both understanding tasks (e.g., question answering) and generation tasks (e.g., editing). Existing methods optimize only for understanding, which often degrades generation performance.
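To see why the quadratic attention term makes token count the dominant cost knob, here is a back-of-envelope sketch (not from the paper; the token counts and hidden dimension are illustrative assumptions). Note that the quadratic term alone would give a 4x saving from halving tokens; the reported 1.94x end-to-end reduction is smaller because projections, FFN layers, and text tokens scale only linearly.

```python
def prefill_attention_flops(num_tokens: int, hidden_dim: int) -> int:
    # One self-attention layer: the QK^T score matrix plus the
    # attention-weighted values, each roughly 2 * n^2 * d multiply-adds.
    return 4 * num_tokens * num_tokens * hidden_dim

full = prefill_attention_flops(576, 4096)   # e.g. a 24x24 visual token grid
half = prefill_attention_flops(288, 4096)   # visual tokens cut by half
print(full / half)  # 4.0 -- the quadratic term alone
```
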


Section 03

Core Ideas and Technical Implementation of G²TR

Details of the G²TR Method

Core Insight: The latent space signal from the generation branch (VAE) can provide task-agnostic token importance evaluation.

Three-Step Process:

  1. Token Importance Estimation: Calculate the consistency score between tokens and VAE latent space representations; retain those with high consistency.
  2. Balanced Selection: Ensure the spatial distribution of retained tokens is uniform.
  3. Redundant Merging: Merge redundant token information into adjacent retained tokens to minimize information loss.

This method does not require fine-tuning and is compatible with existing UMM architectures.
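The three steps above can be sketched as follows. This is a hypothetical toy implementation, not the paper's method: the cosine-similarity consistency score, the 2x2-window balancing scheme, and the nearest-kept merging rule are all illustrative assumptions, and it presumes visual tokens and VAE latents live on the same spatial grid.

```python
import numpy as np

def g2tr_compress(tokens, vae_latents, grid=(24, 24)):
    """tokens, vae_latents: (H*W, D) arrays over a row-major H x W grid.
    Keeps the more VAE-consistent half of each 2x2 window and merges the
    dropped tokens into retained ones. Returns (H*W // 2, D) tokens."""
    H, W = grid
    # Step 1: importance = cosine similarity between token and VAE latent.
    norm = lambda x: x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
    scores = (norm(tokens) * norm(vae_latents)).sum(axis=1)

    idx = np.arange(H * W).reshape(H, W)
    merged = tokens.astype(float).copy()
    counts = np.ones(H * W)
    keep_mask = np.zeros(H * W, dtype=bool)
    for i in range(0, H, 2):
        for j in range(0, W, 2):
            win = idx[i:i + 2, j:j + 2].ravel()
            # Step 2: balanced selection -- top-2 of every 2x2 window, so
            # retained tokens stay spatially uniform instead of clustering.
            order = win[np.argsort(scores[win])[::-1]]
            kept, dropped = order[:2], order[2:]
            keep_mask[kept] = True
            # Step 3: redundant merging -- fold each dropped token into its
            # most similar kept neighbour (averaged, to limit info loss).
            for d in dropped:
                t = kept[np.argmax(tokens[kept] @ tokens[d])]
                merged[t] += tokens[d]
                counts[t] += 1
    merged /= counts[:, None]
    return merged[keep_mask]

rng = np.random.default_rng(0)
tokens = rng.normal(size=(576, 64))    # 24x24 grid of 64-dim tokens
latents = rng.normal(size=(576, 64))
compressed = g2tr_compress(tokens, latents)
print(compressed.shape)  # (288, 64)
```

The per-window top-k is what enforces the "balanced" part of the selection: a global top-k over consistency scores would let retained tokens cluster in salient regions.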


Section 04

Experimental Results and Performance Analysis of G²TR

Experimental Results: Balance Between Efficiency and Performance

  • Efficiency Improvement: Visual tokens are reduced by about half, and pre-filling computation is cut by 1.94x.
  • Performance Preservation: Accuracy on understanding tasks (question answering, image captioning) matches the uncompressed model; generation/editing quality does not decline, with particularly strong results on fine-grained spatial editing.

Section 05

Conclusions and Implications for Model Design of G²TR

Conclusions and Implications

Conclusion: G²TR achieves VAE-guided, task-agnostic token compression that is efficient and preserves multi-task performance.

Implications:

  1. Generation tasks can provide supervision signals for understanding tasks.
  2. Task-agnostic compression is feasible (a general solution).
  3. Compression needs to focus on spatial distribution and task coverage.

Section 06

Limitations and Future Directions of G²TR

Limitations and Future Research

Limitations:

  1. The compression ratio is limited by the VAE architecture.
  2. Not extended to video temporal data.
  3. Token merging incurs additional computational overhead.

Future Directions: Optimize VAE compatibility, extend to video, and reduce merging overhead.