Section 01
Introduction: TwigVLM, Accelerating Large Vision-Language Models via Model Pruning
Large Vision-Language Models (LVLMs) excel at multimodal tasks, but their massive scale makes inference expensive. The ICCV 2025 paper TwigVLM proposes a "growing twigs" structured pruning method that substantially speeds up inference while retaining over 95% of the original model's performance, offering a practical path toward deploying LVLMs in real applications.
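TwigVLM's full pipeline is more involved than any toy snippet can show, but the general idea behind pruning for LVLM acceleration can be illustrated by dropping low-importance visual tokens before they reach the expensive language-model layers. The sketch below is purely illustrative, not the paper's method: the function name, the use of NumPy, and the notion of a per-token "attention score" are all assumptions made here for demonstration.

```python
import numpy as np

def prune_visual_tokens(visual_tokens, attention_scores, keep_ratio=0.25):
    """Illustrative sketch: keep only the highest-scoring visual tokens.

    visual_tokens:    (N, D) array of visual token embeddings.
    attention_scores: (N,) importance score per token (e.g. text-to-image
                      attention averaged over heads -- an assumption here).
    keep_ratio:       fraction of tokens to retain.
    """
    n_keep = max(1, int(len(visual_tokens) * keep_ratio))
    # Indices of the top-n_keep scores, restored to original token order.
    top_idx = np.argsort(attention_scores)[-n_keep:]
    top_idx.sort()
    return visual_tokens[top_idx], top_idx

# Toy usage: 8 visual tokens of dimension 4 with random importance scores.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(8, 4))
scores = rng.random(8)
kept, idx = prune_visual_tokens(tokens, scores, keep_ratio=0.5)
print(kept.shape)  # (4, 4)
```

Because the language model's cost grows with sequence length, shrinking the visual token count this way directly reduces compute per decoding step, which is the basic lever such acceleration methods pull.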