Zing Forum

Reading

V2PE: Enhancing Multimodal Long-Context Understanding via Variable Visual Positional Encoding

The V2PE method proposed by the OpenGVLab team at Shanghai AI Laboratory significantly enhances the ability of vision-language models (VLMs) to handle ultra-long multimodal sequences by introducing variable and smaller positional increments for visual tokens, supporting a context length of up to 1 million tokens.

Tags: V2PE, vision-language models, positional encoding, long context, multimodal, ICCV 2025, OpenGVLab, InternVL
Published 2026-04-04 22:28 · Recent activity 2026-04-04 22:50 · Estimated read: 6 min

Section 01

V2PE: A New Approach to Enhance Multimodal Long-Context Understanding

The OpenGVLab team at Shanghai AI Laboratory proposed the V2PE (Variable Visual Positional Encoding) method, which significantly enhances the ability of vision-language models (VLMs) to handle ultra-long multimodal sequences by introducing variable and smaller positional increments for visual tokens, supporting a context length of up to 1 million tokens. This work has been accepted by ICCV 2025.


Section 02

Bottlenecks in Multimodal Long-Context Processing

Current VLMs struggle with long-context multimodal inputs. Directly applying the positional encoding of large language models to visual tokens is inefficient or even ineffective, especially when the input contains many images or long video sequences, because it fails to capture the spatial relationships and temporal dependencies of visual elements. The root cause is the fundamental difference in information density and sequence structure between text and visual tokens: text is a sequence of discrete symbols, while visual tokens carry rich spatial information with strong internal two-dimensional correlations, and one-dimensional positional encoding ignores this structure of visual data.


Section 03

Core Innovations and Technical Principles of V2PE

V2PE breaks with standard positional encoding conventions; its core idea is to assign variable and smaller positional increments to visual tokens. The technical principles are:

1. Variable increment strategy: the positional increment of visual tokens is adjusted dynamically based on image content.
2. Smaller increment step: visual tokens use smaller increments than text tokens, stretching the positional encoding space so relative positions can be distinguished at a fine granularity.
3. Sequence coherence: positions across the whole interleaved sequence (text + visual) increase monotonically, ensuring correct cross-modal attention computation.
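The three principles above can be illustrated with a small sketch. This is our own simplified illustration, not the authors' implementation: text tokens advance the position index by 1, while visual tokens advance it by a smaller stride `delta` (the function name and the example `delta` value are hypothetical), so positions stay monotonic but a long run of image tokens consumes far less of the positional range.

```python
def assign_positions(token_types, delta=1/16):
    """Assign one monotonically increasing position per token.

    token_types: list of 'text' or 'visual' flags, in sequence order.
    delta: the smaller positional stride used for visual tokens.
    """
    positions = []
    pos = 0.0
    for t in token_types:
        positions.append(pos)
        # Text tokens step by 1; visual tokens step by the smaller delta.
        pos += 1.0 if t == "text" else delta
    return positions

# Two text tokens, four visual tokens, one text token:
seq = ["text", "text"] + ["visual"] * 4 + ["text"]
print(assign_positions(seq))  # [0.0, 1.0, 2.0, 2.0625, 2.125, 2.1875, 2.25]
```

Note how the four visual tokens together occupy only a quarter of a positional step while remaining strictly ordered, which is exactly what lets the encoding space cover much longer visual sequences.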


Section 04

Experimental Validation and Performance

Based on experiments with the InternVL2-2B model, V2PE performs strongly after fine-tuning:

1. General multimodal benchmarks (ChartQA, DocVQA, etc.): matches or outperforms the original model.
2. Long-context benchmarks: MM-NIAH average accuracy of 81.8% (baseline: 21.0%); MileBench average score of 72.5% (baseline: 49.9%).
3. Ultra-long sequences: combined with Ring Attention, it handles sequence lengths of up to 1 million tokens.
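A back-of-the-envelope check (our own arithmetic, not a figure from the paper; the token split and stride are hypothetical) shows why a small visual stride makes million-token inputs tractable for the positional range:

```python
# Suppose a 1M-token input is mostly visual tokens, with a small stride.
n_text, n_visual = 10_000, 990_000   # hypothetical text/visual split
delta = 1 / 256                      # hypothetical small visual stride
span = n_text * 1 + n_visual * delta # positional range actually consumed
print(span)  # 13867.1875 -- far below 1,000,000
```

So a sequence of a million tokens can fit inside a positional range of only a few thousand steps, well within what a model trained on much shorter contexts has seen.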


Section 05

Technical Implementation Details

1. Training data: constructed long-context datasets such as long document reading (the long_mr series), long-context visual question answering (the long_vqa series), and MileBench.
2. Ring Attention: for ultra-long sequences of 256K+ tokens, samples are split across multiple GPUs to bound per-device memory usage.
3. Open source: the project is fully open source, including training code, model weights (on HuggingFace), datasets, evaluation scripts, and reproduction guidelines.
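The memory-limiting split in point 2 can be sketched in a few lines. This is a minimal illustration of the sequence-sharding idea behind Ring Attention style training, not the project's actual code (the helper name is hypothetical, and the real method also rotates key/value blocks between devices): a long token sequence is cut into contiguous chunks, one per GPU, so each device holds only about seq_len / n_gpus tokens.

```python
def split_for_ring(tokens, n_gpus):
    """Cut a token sequence into contiguous per-GPU shards."""
    chunk = (len(tokens) + n_gpus - 1) // n_gpus  # ceil division
    return [tokens[i * chunk:(i + 1) * chunk] for i in range(n_gpus)]

# A 256K-token sequence split across 8 GPUs:
shards = split_for_ring(list(range(262_144)), 8)
print([len(s) for s in shards])  # each shard holds 32768 tokens
```

Each device then attends over its shard while exchanging attention state with its neighbors, which is what keeps peak memory bounded as sequence length grows.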

Section 06

Application Prospects and Summary

Application prospects:

1. Plug-and-play: applies to existing VLM architectures without large-scale modification.
2. Resource-friendly: demonstrated on a lightweight 2B-parameter model.
3. Scenario expansion: million-token support enables processing entire books, long videos, and similar inputs.
4. Inspirational: highlights the key role of positional encoding in multimodal modeling.

Summary: V2PE enhances multimodal long-context capability with minimal architectural change by optimizing the positional encoding of visual tokens, pushing the technical boundaries of VLMs and providing a practical tool for complex tasks.