Zing Forum

V-CAST: Curvature-Aware Spatio-Temporal Pruning Method for Efficient Video Large Language Models

V-CAST proposes a training-free, plug-and-play Token pruning strategy for video large language models. Through a curvature-guided temporal allocation mechanism and a dual-anchor spatial selection mechanism, it maintains 98.6% of the original performance while reducing peak memory and total latency to 86.7% and 86.4% of the Qwen3-VL-8B-Instruct baseline, respectively.

Tags: video large language models, Token compression, spatio-temporal pruning, curvature-aware, visual Tokens, video understanding, multimodal models, inference optimization, Qwen3-VL, MRoPE
Published 2026-03-29 19:53 · Recent activity 2026-03-31 09:53 · Estimated read: 5 min

Section 01

V-CAST: Guide to Curvature-Aware Spatio-Temporal Pruning Method for Efficient Video Large Language Models

V-CAST is a training-free, plug-and-play Token pruning strategy for video large language models. Its curvature-guided temporal allocation and dual-anchor spatial selection preserve 98.6% of the original performance while cutting peak memory to 86.7% and total latency to 86.4% of the Qwen3-VL-8B-Instruct baseline, directly addressing the Token explosion problem.

Section 02

Efficiency Challenges of Video Large Language Models and Dilemmas in Token Compression

Efficiency Challenges

VideoLLMs perform strongly across multiple scenarios, but the massive volume of video data leads to Token explosion, resulting in a large context during the pre-filling phase and a sharp increase in computation and memory overhead.
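To make the scale of the problem concrete, a back-of-the-envelope count helps; the frame rate and per-frame token count below are illustrative assumptions, not figures from the paper:

```python
# Illustrative count; fps and tokens_per_frame are assumed values,
# not numbers reported by the V-CAST paper.
fps = 1                    # sampled frames per second
minutes = 5                # clip length
tokens_per_frame = 256     # patch tokens the vision encoder emits per frame

frames = fps * 60 * minutes
visual_tokens = frames * tokens_per_frame
print(frames, visual_tokens)   # 300 frames -> 76800 visual tokens
```

Even at a modest 1 fps, a five-minute clip already produces tens of thousands of visual Tokens before the language model sees a single word of the prompt, which is what inflates pre-filling cost.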

Dilemmas in Token Compression

Limitations of existing methods: Coarse-grained frame-by-frame allocation ignores content dynamics; scene segmentation easily causes information fragmentation; Token merging leads to MRoPE coordinate misalignment, affecting spatio-temporal reasoning.

Section 03

Core Innovations of V-CAST: Curvature Guidance and Dual-Anchor Selection

Curvature-Guided Temporal Allocation

V-CAST models Token compression as trajectory approximation, using curvature to reflect content change: it identifies high-curvature semantic turning points, perceives event boundaries, and dynamically allocates the Token budget (fewer Tokens for smooth segments, more for visually intense ones).
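The idea can be sketched as follows. The discrete-curvature proxy (second differences of per-frame embeddings) and the proportional budget split are illustrative assumptions, not the paper's exact formulation:

```python
import math

def curvature(frames):
    """Proxy for trajectory curvature: magnitude of the second difference
    of consecutive frame embeddings (large at abrupt content changes)."""
    curv = [0.0] * len(frames)
    for t in range(1, len(frames) - 1):
        diff2 = [frames[t - 1][d] - 2 * frames[t][d] + frames[t + 1][d]
                 for d in range(len(frames[t]))]
        curv[t] = math.sqrt(sum(x * x for x in diff2))
    return curv

def allocate_budget(curv, total, floor=1):
    """Guarantee each frame `floor` tokens; split the remainder in
    proportion to curvature, so dynamic segments get more budget."""
    spare = total - floor * len(curv)
    norm = sum(curv) or 1.0
    return [floor + round(spare * c / norm) for c in curv]

# A static run of frames followed by a scene change at index 3:
emb = [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0], [1.0, 1.0], [1.0, 1.0]]
curv = curvature(emb)
budget = allocate_budget(curv, total=20)
```

The curvature spikes exactly around the scene change, so those frames receive most of the budget while the static frames keep only the floor allocation.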

Dual-Anchor Spatial Selection

Preserves high-entropy visual regions without perturbing attention, keeps each Token's original spatio-temporal coordinates, and thereby avoids MRoPE coordinate misalignment.
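A minimal sketch of coordinate-preserving selection follows; the scoring signal and `keep` count are placeholders, and the paper's actual dual-anchor criterion (which combines two reference signals) is not reproduced here:

```python
def prune_frame(tokens, coords, scores, keep):
    """Keep the `keep` highest-scoring tokens of one frame, but return them
    with their ORIGINAL (t, h, w) coordinates and in original raster order,
    so MRoPE positions are never re-indexed (no coordinate misalignment)."""
    top = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)[:keep]
    kept = sorted(top)  # restore original spatial order
    return [tokens[i] for i in kept], [coords[i] for i in kept]

# Toy frame: 4 tokens; scores stand in for a high-entropy/importance signal.
tokens = ["a", "b", "c", "d"]
coords = [(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1)]
scores = [0.1, 0.9, 0.4, 0.8]
kept_tokens, kept_coords = prune_frame(tokens, coords, scores, keep=2)
print(kept_tokens, kept_coords)  # ['b', 'd'] [(0, 0, 1), (0, 1, 1)]
```

The key design point is that pruning drops entries rather than merging or renumbering them, so each surviving Token's positional encoding still refers to its true location in the video.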

Section 04

Experimental Results of V-CAST: Balance Between Accuracy and Efficiency

Accuracy Preservation

Achieves 98.6% of the original performance across multiple tasks, with an average improvement of 1.1% over the second-best method.

Efficiency Improvement

Peak memory is reduced to 86.7% of the baseline, and total latency to 86.4%.

Cross-Architecture Compatibility

Training-free and plug-and-play, applicable to VideoLLMs of different architectures and scales.

Section 05

Practical Application Value of V-CAST

V-CAST's efficiency gains facilitate:

  • Real-time video analysis (low latency supports real-time responses);
  • Edge device deployment (reduces memory usage);
  • Long video processing (avoids Token explosion);
  • Cloud cost optimization (improves efficiency and reduces costs).

Section 06

Limitations and Future Directions of V-CAST

Limitations

Curvature calculation incurs additional preprocessing overhead.

Future Directions

Optimize curvature calculation, explore synergy with fine-tuning, integrate audio cues, and support dynamic resolution input.

Section 07

Conclusion: V-CAST Promotes Efficient Deployment

V-CAST balances accuracy and efficiency through its curvature-guided and dual-anchor mechanisms, and its training-free, plug-and-play design allows rapid adoption, paving the way for the practical deployment and scaling of video understanding systems.