Panoramic Map of Multimodal Models: The Evolution of Architectures from MLLM to NMM

The Awesome Multimodal Modeling resource list systematically organizes the development of multimodal AI, covering three core paradigms—multimodal large language models, unified multimodal models, and native multimodal models—providing researchers with a clear classification system and architecture comparison.

Tags: Multimodal Models · MLLM · Unified Multimodal Models · Native Multimodal Models · Vision-Language Models · Multimodal AI · Architecture Evolution · Awesome List
Published 2026-04-13 16:59 · Recent activity 2026-04-13 17:22 · Estimated read 6 min

Section 01

Panoramic Map of Multimodal Models: Introduction to the Evolution of Architectures from MLLM to NMM

Based on the Awesome Multimodal Modeling resource list maintained by OpenEnvision, this article systematically organizes the development of multimodal AI, covering four evolutionary stages—traditional multimodal models, multimodal large language models (MLLM), unified multimodal models (UMM), and native multimodal models (NMM)—as well as three core paradigms (MLLM, UMM, NMM). It provides researchers with a clear classification system and architecture comparison, helping to clarify the technical evolution path of the field.


Section 02

Current Status of Multimodal AI and the Problem of Conceptual Confusion

The multimodal AI field has evolved rapidly from early image-text alignment to video understanding, audio generation, and cross-modal reasoning, but this progress is accompanied by conceptual confusion: the definitions and differences between MLLM, UMM, and NMM are unclear, and the technical considerations behind architecture design lack systematic organization. The Awesome Multimodal Modeling resource list emerged to address these issues—it is not just a collection of papers and projects, but also a knowledge graph for multimodal AI.


Section 03

Four Evolutionary Stages of Multimodal Models

The resource list divides multimodal models into four stages:

  1. Traditional Multimodal Models: Focus on representation learning and modal alignment, task-specific with no unified architecture;
  2. MLLMs: Built on pre-trained LLMs, grafting visual capabilities via visual adapters (e.g., Q-Former, cross-attention) — essentially a "language model + visual plug-in", whose main limitation is modal asymmetry;
  3. UMMs: Unified architecture for all modalities, divided into three generation paradigms: diffusion, autoregressive, and hybrid;
  4. NMMs: Native multimodal pre-training, end-to-end unified architecture, divided into early fusion (e.g., Gemini) and late fusion strategies.
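The "language model + visual plug-in" idea behind stage 2 can be sketched in a few lines. The snippet below is a toy illustration (all dimensions and the linear projector are hypothetical, loosely modeled on LLaVA-style adapters; real models use e.g. 1024-dim ViT features and 4096-dim LLM embeddings), not the implementation of any particular model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions for illustration only.
VISION_DIM, LLM_DIM = 8, 16

# LLaVA-style adapter: a learned linear projection mapping frozen
# vision-encoder features into the LLM's token-embedding space.
W = rng.normal(size=(VISION_DIM, LLM_DIM)) * 0.02

def project_visual_tokens(patch_features: np.ndarray) -> np.ndarray:
    """Map vision patch features (n_patches, VISION_DIM) to LLM space."""
    return patch_features @ W

# Frozen vision encoder output for one image: 4 patch features.
patch_features = rng.normal(size=(4, VISION_DIM))
visual_tokens = project_visual_tokens(patch_features)

# Text prompt embeddings from the (frozen) LLM: 3 tokens.
text_tokens = rng.normal(size=(3, LLM_DIM))

# The MLLM input simply grafts visual tokens in front of the text tokens;
# typically only the adapter (and optionally the LLM) is trained.
llm_input = np.concatenate([visual_tokens, text_tokens], axis=0)
print(llm_input.shape)  # (7, 16)
```

Because the language model dominates and vision enters only through this small adapter, the modal asymmetry noted above falls naturally out of the architecture.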

Section 04

Comparative Analysis of the Three Paradigms' Architectures

| Dimension | MLLM | UMM | NMM |
| --- | --- | --- | --- |
| Training Cost | Low (frozen LLM) | Medium (multimodal pre-training) | High (native multimodal pre-training) |
| Modal Symmetry | Low (language-dominant) | High | High |
| Generation Capability | Limited (mainly text) | Strong (multimodal generation) | Strong (multimodal generation) |
| Inference Efficiency | High | Medium | Depends on architecture design |
| Applicable Scenarios | Visual understanding, VQA | Multimodal generation, editing | General-purpose multimodal assistant |
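The training-cost and modal-symmetry rows can be made concrete by contrasting early fusion (one joint token sequence for all modalities from the first layer, as in NMMs like Gemini) with late fusion (each modality encoded separately, representations merged only at the end). The sketch below is a toy illustration with hypothetical dimensions; mean-pooling stands in for a real per-modality encoder:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 16  # shared model dimension (hypothetical)

# Toy token streams for three modalities.
image_tokens = rng.normal(size=(4, D))
audio_tokens = rng.normal(size=(2, D))
text_tokens = rng.normal(size=(3, D))

def early_fusion(*streams: np.ndarray) -> np.ndarray:
    """Early fusion: one interleaved sequence from layer one.
    Every token attends to every other, so attention cost grows
    with the combined length — hence higher training cost."""
    return np.concatenate(streams, axis=0)

def late_fusion(*streams: np.ndarray) -> np.ndarray:
    """Late fusion: each modality is encoded separately
    (mean-pooled here as a stand-in); the compact summaries
    meet only at the end, limiting cross-modal interaction."""
    return np.stack([s.mean(axis=0) for s in streams], axis=0)

joint = early_fusion(image_tokens, audio_tokens, text_tokens)
merged = late_fusion(image_tokens, audio_tokens, text_tokens)
print(joint.shape, merged.shape)  # (9, 16) (3, 16)
```

The longer joint sequence is exactly the computational-efficiency vs. modal-interaction trade-off the table summarizes: early fusion buys full symmetry at quadratic attention cost, while late fusion is cheaper but interacts across modalities only through pooled summaries.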

Section 05

Core Value of the Awesome Multimodal Modeling Resource List

The value of this resource list is reflected in:

  1. Systematic Classification: A clear framework to help quickly locate research directions;
  2. Evolution Timeline: Organized by stages to show technical context, aiding trend understanding;
  3. Visualization Support: Architecture diagrams and comparison tables lower the threshold for technical understanding;
  4. Continuous Updates: The open-source project follows the latest research to maintain timeliness.

Section 06

Practical Recommendations for Multimodal Researchers

Based on the resource list framework, researchers can refer to the following recommendations:

  1. Establish a Panoramic Vision: Understand that the three paradigms are not substitutes but scenario-adapted solutions;
  2. Focus on Trade-offs: Weigh design trade-offs such as computational efficiency vs. sufficiency of modal interaction;
  3. Track NMM Progress: Native multimodal models are the future direction—follow open-source community dynamics;
  4. Combine Practice with Theory: Deepen understanding through experiments with representative models (e.g., LLaVA, Stable Diffusion).