MissRAG: An Innovative RAG Framework for Solving Missing Modality Issues in Multimodal Large Language Models

The MissRAG framework, accepted at ICCV 2025, is the first to apply retrieval-augmented generation (RAG) to the missing modality problem in multimodal large models. It supports retrieval and generation over arbitrary combinations of three modalities: audio, visual, and text.

Multimodal Large Language Models · RAG · Missing Modality · ICCV 2025 · Retrieval-Augmented Generation · Cross-Modal Retrieval · Modality-Aware Prompting · OneLLM · ChatBridge · VideoLLaMA
Published 2026-03-30 18:26 · Recent activity 2026-03-30 18:48 · Estimated read: 4 min

Section 01

Introduction: MissRAG Framework—An Innovative Solution to Missing Modality Issues in Multimodal Large Models

The MissRAG framework, accepted at ICCV 2025, is the first to apply RAG to the missing modality problem in multimodal large language models (MLLMs). It supports retrieval and generation over arbitrary combinations of three modalities: audio, visual, and text. Through retrieval and prompt engineering alone, it improves the robustness of existing models without modifying their architecture or retraining them.


Section 02

Background and Challenges: The Dilemma of Missing Modalities in Multimodal Systems

Multimodal large language models (MLLMs) perform well on tasks like visual question answering and video understanding. In real-world deployments, however, modalities are often missing due to sensor failures, privacy restrictions, and other factors. Traditional models assume complete modality inputs, so their performance drops sharply when a modality is absent. This missing modality problem severely limits their reliability and practicality.


Section 03

MissRAG Technical Architecture: Cross-Modal Retrieval and Modality-Aware Prompting

The core idea of MissRAG is simple: when a modality is missing, retrieve relevant information from a prototype pool to fill the gap. The architecture supports arbitrary combinations of three modalities (audio, video, text), using ImageBind as the embedder to map all modalities into a unified embedding space. The retrieval strategy adapts flexibly to models with fixed-length or variable-length representations. In addition, modality-aware prompting explicitly informs the model which modalities are missing and guides the generation process accordingly.
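The retrieve-and-prompt flow described above can be sketched as follows. This is an illustrative approximation, not the authors' code: the cosine-similarity lookup, the `retrieve_missing` and `modality_aware_prompt` helpers, and the synthetic prototype pool are all assumptions standing in for an ImageBind-style shared embedding space.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve_missing(query_emb, prototype_pool):
    """Pick the prototype closest to an available-modality embedding.

    Assumes all embeddings live in one shared (ImageBind-like) space,
    so a video embedding can query a pool of audio prototypes directly.
    """
    sims = np.array([cosine_sim(query_emb, p) for p in prototype_pool])
    idx = int(np.argmax(sims))
    return idx, prototype_pool[idx]

def modality_aware_prompt(present, missing):
    """Tell the model explicitly which inputs are real vs. retrieved."""
    return (f"Available modalities: {', '.join(present)}. "
            f"The {', '.join(missing)} input is missing and has been "
            f"replaced by retrieved prototypes; weigh it accordingly.")

# Toy example: 5 audio prototypes of dimension 8; the video embedding
# happens to lie near prototype 3, so retrieval should select it.
rng = np.random.default_rng(0)
pool = rng.normal(size=(5, 8))
video_emb = pool[3] + 0.01 * rng.normal(size=8)

idx, audio_proto = retrieve_missing(video_emb, pool)
prompt = modality_aware_prompt(["video", "text"], ["audio"])
print(idx, prompt)
```

The retrieved prototype would then be fed to the frozen MLLM in place of the missing audio input, alongside the modality-aware prompt, with no change to the model's weights.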


Section 04

Experimental Validation: Performance Improvement Across Models and Tasks

MissRAG was evaluated on OneLLM (7B), ChatBridge (13B), and VideoLLaMA 2 (7B), across tasks including MUSIC-AVQA (audio-visual question answering), VALOR and Charades-Ego (description generation), and MOSI/MOSEI (sentiment analysis). Results show that MissRAG effectively mitigates the performance loss caused by missing modalities while maintaining high accuracy and generation quality.


Section 05

Practical Significance and Application Prospects: A Breakthrough in Robustness and Versatility

MissRAG offers a lightweight, plug-and-play way to improve the robustness of multimodal systems without retraining the underlying models. Its core idea can be extended to further modalities (e.g., depth images, radar data). In privacy-sensitive scenarios, it lets a system keep providing service through retrieval even when users withhold certain modalities.


Section 06

Open Source and Reproducibility: Promoting Community Research

The MissRAG code has been open-sourced on GitHub, including materials for reproducing the experiments. Precomputed modality prototype pools and token datasets have also been released on Hugging Face, lowering the barrier to reproduction and supporting community research on the missing modality problem.