Zing Forum


BoxTuning: Reshaping the Object Understanding Paradigm of Video Multimodal Large Models by Directly Injecting Target Bounding Boxes via Visual Prompts

BoxTuning proposes an innovative visual prompting method that directly renders colored bounding boxes and motion trajectories onto video frames, addressing the modality mismatch issue in the traditional text-coordinate paradigm. It achieves an 87-93% reduction in text tokens while maintaining full temporal resolution, outperforming existing baselines on five video question-answering benchmarks.

Tags: BoxTuning · Multimodal Large Models · Video QA · Visual Prompting · Object Localization · Bounding Boxes · Trajectory Encoding · Modality Alignment · Spatiotemporal Understanding · MLLM
Published 2026-04-13 15:49 · Recent activity 2026-04-14 10:48 · Estimated read 6 min

Section 01

BoxTuning: A New Paradigm for Reshaping Object Understanding in Video Multimodal Models

BoxTuning proposes an innovative visual prompting method that directly renders colored bounding boxes and motion trajectories onto video frames, addressing the modality mismatch issue in the traditional text-coordinate paradigm. This method achieves an 87-93% reduction in text tokens while maintaining full temporal resolution, outperforming existing baselines on five video question-answering benchmarks and providing a new paradigm for object understanding in video multimodal large models.


Section 02

Background: Challenges of Object Localization in Video Understanding and Limitations of Existing Solutions

Video Question-Answering (Video QA) requires fine-grained, object-level spatiotemporal understanding. However, existing Multimodal Large Language Models (MLLMs) adopt a holistic encoding strategy and lack an explicit object localization mechanism. To close this gap, recent studies serialize bounding box coordinates into text tokens, but this creates a modality mismatch: first, the high token cost of coordinate sequences forces temporal downsampling; second, that downsampling discards dynamic information, degrading the model's grasp of motion characteristics.
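The token-cost problem with coordinate serialization can be sketched in a few lines; the serialization format and counts below are illustrative assumptions, not the paper's exact prompt:

```python
# Minimal sketch of the text-coordinate paradigm described above.
# Serializing every box on every frame into text grows linearly with
# frames * objects; the format here is a made-up example.

def serialize_boxes(frames):
    """frames: list of {object_name: (x1, y1, x2, y2)} dicts, one per frame."""
    lines = []
    for t, boxes in enumerate(frames):
        for name, (x1, y1, x2, y2) in boxes.items():
            lines.append(f"frame {t}: {name} at [{x1},{y1},{x2},{y2}]")
    return "\n".join(lines)

# Two objects tracked over 32 frames -> 64 coordinate lines of text,
# which is why coordinate serialization pressures temporal downsampling.
frames = [{"ball": (10 + t, 20, 30 + t, 40), "cup": (50, 60, 70, 80)}
          for t in range(32)]
text = serialize_boxes(frames)
print(len(text.splitlines()))  # 64
```

Because the line count scales with both frame rate and object count, a model with a fixed context budget must drop frames, which is the downsampling pressure the section describes.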


Section 03

Core Innovation of BoxTuning: Injecting Object Spatiotemporal Information via Visual Prompts

BoxTuning directly injects object spatiotemporal information into the visual modality:

  1. Colored Bounding Box Rendering: Assign a unique color to each object, and label it with a semi-transparent rectangle on the original frame to preserve visual context;
  2. Motion Trajectory Encoding: Use gradient-colored lines on keyframes to show motion paths, intuitively encoding direction, speed (length/density), and acceleration (curvature change);
  3. Minimalist Text Legend: Only retain color-object name mappings (e.g., "Red = Ball A"), reducing text tokens by 87-93%.
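The three steps above can be sketched in plain Python. This is an illustrative mock-up, not the paper's rendering pipeline: the frame is a nested list of RGB tuples, and names like `draw_box` and `make_legend` are my own:

```python
# Sketch of steps 1 and 3: alpha-blend a semi-transparent colored box
# onto a frame (preserving the underlying pixels as visual context) and
# emit the compact color-to-name legend.

def draw_box(frame, box, color, alpha=0.5):
    """frame: H x W list of (r, g, b) tuples; box: (x1, y1, x2, y2) inclusive."""
    x1, y1, x2, y2 = box
    for y in range(y1, y2 + 1):
        for x in range(x1, x2 + 1):
            r, g, b = frame[y][x]
            cr, cg, cb = color
            # Blend box color over the original pixel instead of overwriting it.
            frame[y][x] = (round(r + alpha * (cr - r)),
                           round(g + alpha * (cg - g)),
                           round(b + alpha * (cb - b)))
    return frame

def make_legend(assignments):
    """assignments: {object_name: color_name} -> compact text legend."""
    return "; ".join(f"{color} = {name}" for name, color in assignments.items())

frame = [[(0, 0, 0)] * 8 for _ in range(8)]   # black 8x8 test frame
draw_box(frame, (2, 2, 5, 5), (255, 0, 0))    # semi-transparent red box
legend = make_legend({"Ball A": "Red"})
print(frame[3][3], legend)  # (128, 0, 0) Red = Ball A
```

In a real implementation the same blending would run over decoded video frames (e.g. with OpenCV or Pillow), and trajectory lines would be drawn with per-segment gradient colors; the legend string is all that enters the text prompt.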

Section 04

In-depth Interpretation of BoxTuning's Technical Advantages

BoxTuning's advantages are reflected in:

  1. Natural Modality Alignment: Visual information is transmitted through the visual channel, which aligns with human perception;
  2. Full Temporal Resolution: No need for temporal downsampling, preserving fine-grained motion information between frames;
  3. Optimized Computational Efficiency: Fewer text tokens lighten the language model's load, shrink the required context window, and free capacity for high-level reasoning.
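The efficiency claim in point 3 reduces to simple token arithmetic. The per-item token counts below are assumptions chosen for illustration; the paper's 87-93% figure depends on its actual tokenizer and prompt format:

```python
# Back-of-the-envelope token accounting for the efficiency claim above.
# TOKENS_PER_BOX and TOKENS_PER_LEGEND_ENTRY are illustrative guesses.

TOKENS_PER_BOX = 12          # e.g. "frame 17: ball at [x1,y1,x2,y2]"
TOKENS_PER_LEGEND_ENTRY = 4  # e.g. "Red = Ball A"

def text_coordinate_tokens(n_frames, n_objects):
    # Text-coordinate paradigm: every object on every frame costs tokens.
    return n_frames * n_objects * TOKENS_PER_BOX

def boxtuning_tokens(n_objects):
    # BoxTuning-style prompt: one legend entry per object, no per-frame text.
    return n_objects * TOKENS_PER_LEGEND_ENTRY

coord = text_coordinate_tokens(32, 2)   # 768 tokens
legend = boxtuning_tokens(2)            # 8 tokens
reduction = 1 - legend / coord
print(coord, legend, f"{reduction:.0%}")  # 768 8 99%
```

The savings grow with frame count because the legend cost is constant per object, which is also why the visual-prompt approach can afford full temporal resolution.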

Section 05

Experimental Validation: BoxTuning's Excellent Performance on Multiple Video QA Benchmarks

The research team tested BoxTuning on five video QA benchmarks: CLEVRER (physical reasoning), Perception Test (basic perception), STAR (spatiotemporal reasoning), NExT-QA (long video understanding), and IntentQA (intent understanding). The results show that BoxTuning significantly outperforms text-coordinate baselines on space-oriented tasks, and almost eliminates the accuracy drop of traditional methods in reasoning-intensive tasks.


Section 06

Insights from BoxTuning for Multimodal Model Design

BoxTuning provides insights for multimodal model design:

  1. Necessity of Paradigm Shift: Respect the characteristics of modalities and avoid complexity and loss caused by forced information conversion;
  2. Potential of Visual Encoding: The visual channel has higher information density than text; full utilization can improve efficiency;
  3. Explicit Modeling of Dynamic Information: Spatializing temporal information through motion trajectories provides new ideas for processing sequential data.

Section 07

Limitations of BoxTuning and Future Exploration Directions

BoxTuning still has open issues:

  1. Scalability to Complex Scenarios: Colored bounding boxes may cause visual clutter when there are multiple objects;
  2. Multimodal Fusion: Need to expand to other sensory channels such as audio and haptics;
  3. End-to-End Learning: Explore enabling models to autonomously learn visual prompt generation.