Zing Forum

LinguDistill: Restoring Language Capabilities of Vision-Language Models via Cross-Modal Distillation

When a pre-trained language model is adapted into a vision-language model (VLM), its language capabilities often degrade due to representation shift and cross-modal interference. LinguDistill proposes an adapter-free distillation method: by sharing each layer's KV cache with the frozen original language model so that it can serve as a teacher, and distilling selectively on language-dense data, it recovers approximately 10% of the lost language performance.

Tags: vision-language models, knowledge distillation, KV cache sharing, language capability recovery, cross-modal learning, multimodal adaptation, representation shift, selective distillation
Published 2026-04-01 20:38 · Recent activity 2026-04-02 09:51 · Estimated read 7 min
1

Section 01

LinguDistill: Restoring Language Capabilities of Vision-Language Models via Cross-Modal Distillation (Introduction)

When a pre-trained language model is adapted into a vision-language model (VLM), its language capabilities often degrade due to representation shift and cross-modal interference. LinguDistill proposes an adapter-free distillation method: by sharing each layer's KV cache with the frozen original language model so that it can serve as a teacher, and distilling selectively on language-dense data, it recovers approximately 10% of the lost language performance without affecting visual capabilities.

2

Section 02

Hidden Costs of Multimodal Adaptation (Background)

Vision-language models (VLMs) can perform complex tasks such as image captioning and visual question answering, but the adaptation process has hidden costs: the pure language capabilities of the original language model drop significantly (e.g., lower scores on benchmarks such as HellaSwag and ARC). Three causes are identified: representation shift (mapping in visual features alters the structure of the language representations), cross-modal interference (visual tokens compete for the model's attention and computation), and imbalanced fine-tuning data (a low proportion of pure-text data causes the model's language knowledge to fade).

3

Section 03

Limitations of Existing Solutions

Facing language capability degradation, existing solutions fall short:

  1. Introducing additional modules such as adapters increases architectural complexity and inference overhead, and rests on strong assumptions;
  2. Task-specific fine-tuning has limited effect, rarely returns the model to its original level, and may also harm visual capabilities.

4

Section 04

Core Methods of LinguDistill

Core Idea

Use the original language model as a teacher to transfer language capabilities via knowledge distillation, without modifying the architecture or increasing inference overhead.

Inter-layer KV Cache Sharing Mechanism

Each Transformer layer's KV cache in the student model (the VLM) is shared with the frozen teacher model (the original language model), allowing the teacher to perceive visual information while its parameters, and hence its language capabilities, remain intact.
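A minimal sketch of this mechanism (all names, shapes, and the toy attention function below are illustrative assumptions, not the paper's implementation): the student VLM computes keys and values over a sequence that includes visual tokens, and the frozen teacher attends over that shared cache with its own queries, so it "sees" visual context without any weight updates.

```python
import torch


def attention(q, k, v):
    # plain scaled dot-product attention, for illustration only
    scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
    return torch.softmax(scores, dim=-1) @ v


d = 8  # toy head dimension

# student produces the KV cache for a sequence of 6 tokens:
# 2 visual tokens followed by 4 text tokens (hypothetical layout)
student_k = torch.randn(1, 6, d)
student_v = torch.randn(1, 6, d)

# the frozen teacher attends with its own queries (one per text
# position) over the *shared* student KV cache; no_grad keeps the
# teacher out of the computation graph, mirroring its frozen role
teacher_q = torch.randn(1, 4, d)
with torch.no_grad():
    teacher_out = attention(teacher_q, student_k, student_v)

print(teacher_out.shape)  # one output vector per text position
```

Because only the cache is passed across, the teacher's parameters are never touched, which is what keeps its language capabilities intact.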

Selective Cross-Modal Distillation

Distillation is performed only on language-dense data, focusing on restoring language capabilities without disturbing visual grounding. During training, the student generates predictions, the teacher produces target distributions by consuming the shared KV cache, and a distillation loss guides the student to align with the teacher.
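The selective loss can be sketched as a standard temperature-softened KL distillation term, masked to language-dense positions (the mask construction and temperature here are assumptions; the paper's exact criterion is not specified in this summary):

```python
import torch
import torch.nn.functional as F


def selective_distill_loss(student_logits, teacher_logits, lang_mask, T=2.0):
    # soft targets from the frozen teacher, softened by temperature T
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    # per-position KL(teacher || student), summed over the vocabulary
    kl = F.kl_div(log_p_student, p_teacher, reduction="none").sum(-1)
    # average only over language-dense positions; T^2 is the usual
    # gradient-scale correction for temperature-softened distillation
    return (kl * lang_mask).sum() / lang_mask.sum().clamp(min=1) * T * T


# toy batch: 2 sequences, 5 positions, 100-token vocabulary
student_logits = torch.randn(2, 5, 100)
teacher_logits = torch.randn(2, 5, 100)
# hypothetical mask: 1.0 marks language-dense positions
lang_mask = torch.tensor([[1., 1., 0., 1., 1.],
                          [1., 0., 0., 1., 1.]])
loss = selective_distill_loss(student_logits, teacher_logits, lang_mask)
```

Masking the loss rather than the data is one way to realize "distill only where language matters": visual-grounding positions contribute no gradient, so the student's visual behavior is left alone.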

5

Section 05

Experimental Results: Significant Language Capability Recovery (Evidence)

On pure language benchmarks such as HellaSwag, ARC, and MMLU, LinguDistill recovers approximately 10% of the lost performance, approaching the original language model's level, while performance on multimodal tasks such as visual question answering and image captioning remains unchanged. Against the baselines, it incurs lower overhead than adapter methods and recovers more than task-specific fine-tuning does, without impairing visual capabilities.

6

Section 06

Technical Insights and Methodological Significance (Conclusion)

  1. The language knowledge of the original language model is a valuable asset that can be preserved via distillation;
  2. KV cache sharing is a lightweight cross-modal information transfer method;
  3. Selective distillation emphasizes the importance of task orientation.

Methodologically, it embodies the idea of "returning to the origin": leveraging the power of the original model rather than adding new components.

7

Section 07

Limitations and Future Directions (Suggestions)

Limitations

  • Relies on the original language model as a teacher, which may not be applicable in proprietary API scenarios;
  • Implementing KV cache sharing requires fine control of the model's internal state, which is highly intrusive.

Future Directions

  • Explore more flexible teacher-student interaction mechanisms;
  • Study multimodal data distillation strategies to improve both language and visual capabilities simultaneously;
  • Extend to other modal combinations such as audio-language.

8

Section 08

Application Prospects and Conclusion

Application Prospects

  • Quickly restore language capabilities for already trained VLMs to improve real-world application performance;
  • Friendly to resource-constrained scenarios (no additional inference parameters, suitable for edge deployment).

Conclusion

LinguDistill addresses VLM language capability degradation in a concise way, offering a new approach to building balanced, powerful multimodal systems and a useful lesson in how multimodal AI can coordinate the contributions of different modalities.