Zing Forum


OptMerge: Research on Unifying Multimodal Large Language Model Capabilities and Modalities via Model Merging

The OptMerge project, accepted by ICLR 2026, proposes an innovative multimodal large language model merging method that can integrate capabilities of different modalities without retraining, enabling unified processing of multiple modalities such as vision, audio, and video.

Tags: Multimodal LLM · Model Merging · TIES-Merging · Vision-Language Model · Audio Understanding · Video Understanding · ICLR 2026 · Parameter Fusion · Multimodal Learning
Published 2026-05-08 16:12 · Recent activity 2026-05-08 16:19 · Estimated read: 3 min

Section 01

OptMerge Project Introduction (Accepted by ICLR 2026)

OptMerge is a research project accepted by ICLR 2026. It proposes an innovative merging method for multimodal large language models that integrates the capabilities of multiple modalities such as vision, audio, and video without retraining. This addresses two core problems of training a separate model per modality: high cost and loss of each modality's specialized capabilities.


Section 02

Research Background and Motivation

Multimodal Large Language Models (MLLMs) are developing rapidly, but models for different modalities are usually trained independently, each with its own parameters and architecture. Training a unified model from scratch is costly and tends to erode each modality's specialized capabilities. OptMerge instead explores the technical path of model merging: integrating the capabilities of multiple single- or multi-modal expert models through parameter fusion, without joint training from scratch.


Section 03

Core Technical Innovations

Principles of Model Merging Technology

  1. Task Vector Method: Compute the difference between each fine-tuned model and the shared pre-trained weights (the task vector), then add a weighted average of these vectors back to the pre-trained weights.
  2. TIES-Merging Strategy: Three steps: trimming (pruning small-magnitude updates to reduce noise), sign election (voting for the dominant direction of each parameter's updates), and disjoint merging (averaging only the updates whose sign agrees with the elected direction).
  3. Multimodal Capability Integration: Supports integrating vision (CLIP), audio, and video expert capabilities into a single model.
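A minimal sketch of the two merging steps above, in plain Python over flat parameter lists. This is illustrative only: real implementations operate on model state dicts of tensors, and the function names, the 0.5 trim fraction, and the toy weights below are assumptions, not values from the paper.

```python
def trim(vec, keep_frac=0.5):
    """TIES step 1: keep only the top keep_frac entries of a task
    vector by magnitude, zeroing the rest to reduce noise."""
    k = max(1, round(len(vec) * keep_frac))
    order = sorted(range(len(vec)), key=lambda i: abs(vec[i]), reverse=True)
    keep = set(order[:k])
    return [v if i in keep else 0.0 for i, v in enumerate(vec)]

def ties_merge(pretrained, finetuned_models, keep_frac=0.5, alpha=1.0):
    """Merge several fine-tuned expert models into one set of weights.

    pretrained / finetuned_models entries: dicts mapping a parameter
    name to a flat list of floats (a stand-in for a real state dict).
    """
    merged = {}
    for name, base in pretrained.items():
        # Task vectors: fine-tuned weights minus pre-trained weights.
        taus = [[w - b for w, b in zip(ft[name], base)]
                for ft in finetuned_models]
        trimmed = [trim(t, keep_frac) for t in taus]
        out = []
        for i, b in enumerate(base):
            vals = [t[i] for t in trimmed]
            # TIES step 2: sign election -- pick the direction with the
            # larger total magnitude across models.
            pos = sum(v for v in vals if v > 0)
            neg = -sum(v for v in vals if v < 0)
            sign = 1.0 if pos >= neg else -1.0
            # TIES step 3: disjoint merging -- average only the updates
            # that agree with the elected sign.
            agree = [v for v in vals if v * sign > 0]
            delta = sum(agree) / len(agree) if agree else 0.0
            out.append(b + alpha * delta)
        merged[name] = out
    return merged

# Toy usage: merge a "vision" and an "audio" expert fine-tuned from
# the same (here all-zero) pre-trained weights.
pre = {"w": [0.0, 0.0, 0.0, 0.0]}
vision = {"w": [1.0, -0.1, 2.0, 0.0]}
audio = {"w": [1.0, 0.2, -2.0, 0.0]}
print(ties_merge(pre, [vision, audio]))
```

Note how the small conflicting update at index 1 is trimmed away, and the sign conflict at index 2 is resolved by keeping only the elected direction rather than averaging the two experts toward zero.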