
Lumina-DiMOO: A New Paradigm for Multimodal Large Models with Unified Discrete Diffusion Architecture

The Lumina-DiMOO model, open-sourced by the Alpha-VLLM team, adopts a fully discrete diffusion architecture that unifies generation and understanding across modalities such as text and images, and achieves leading results among open-source unified multimodal models on multiple benchmarks.

Tags: multimodal large models, diffusion models, image generation, image understanding, discrete diffusion, unified architecture, open-source models, Alpha-VLLM
Published 2025-09-10 08:00 · Recent activity 2026-05-16 14:48 · Estimated read: 8 min

Section 01

Lumina-DiMOO: A New Paradigm for Multimodal Large Models with Unified Discrete Diffusion Architecture (Introduction)

The Lumina-DiMOO model, open-sourced by the Alpha-VLLM team, is a multimodal large model built on a fully discrete diffusion architecture, designed to unify the generation and understanding of multimodal tasks such as text and images. It achieves leading results among open-source unified multimodal models on multiple widely used benchmarks, with weights released on Hugging Face alongside complete inference and training code and a technical report.


Section 02

Development Dilemmas of Multimodal Large Models (Background)

In recent years, large language models (LLMs) have made breakthroughs in text understanding and generation, but multimodal processing still faces several dilemmas: the traditional "visual encoder + large language model" concatenation architecture suffers from information loss in transmission and high system complexity; existing models often separate generation and understanding capabilities, making it difficult to reach top-tier performance in both; and diffusion models differ substantially in mechanism from autoregressive (AR) language models, making seamless integration challenging. These problems limit the range of applications and hinder progress toward artificial general intelligence (AGI).


Section 03

Core Technical Innovations of Lumina-DiMOO (Methods)

Unified Discrete Diffusion Architecture

All modalities are discretized into tokens (vector quantization (VQ) for images, a standard tokenizer for text) and modeled uniformly through a discrete diffusion process. This simplifies training, improves inference efficiency, strengthens cross-modal alignment, and makes it straightforward to extend to new modalities.
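
To make this concrete, here is a minimal, hypothetical sketch of confidence-based sampling in a masked discrete diffusion model, in the spirit of MaskGIT-style decoding. The MASK_ID constant, the function name, and the model interface are illustrative assumptions, not the actual Lumina-DiMOO API.

```python
import torch

MASK_ID = 0  # hypothetical ID of the [MASK] token in the shared vocabulary

@torch.no_grad()
def discrete_diffusion_sample(model, seq_len, num_steps=32):
    """Toy confidence-based sampler over a unified token sequence.

    Assumes `model(tokens)` returns logits of shape (seq_len, vocab_size)
    predicting the original token at every position.
    """
    tokens = torch.full((seq_len,), MASK_ID, dtype=torch.long)  # start fully masked
    for step in range(num_steps):
        masked = tokens == MASK_ID
        if not masked.any():
            break
        logits = model(tokens)                           # (seq_len, vocab_size)
        conf, pred = logits.softmax(dim=-1).max(dim=-1)  # per-position confidence
        # Unmask a fraction of the most confident still-masked positions,
        # so that every position is revealed by the final step.
        n = max(1, masked.sum().item() // (num_steps - step))
        conf = conf.masked_fill(~masked, float("-inf"))  # only consider masked slots
        idx = conf.topk(n).indices
        tokens[idx] = pred[idx]
    return tokens  # a mix of text token IDs and VQ image codes
```

Because text tokens and VQ image codes live in one vocabulary, the same denoising loop serves both generation and understanding, which is what makes cross-modal alignment and new-modality extension comparatively easy.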

Diverse Multimodal Capabilities

Supports tasks such as text-to-image generation, image editing, inpainting, outpainting, and visual question answering, covering the full spectrum of generation and understanding.

Efficient Sampling Mechanism

Lumina-DiMOO introduces a Max Logit-based Cache (ML-Cache) mechanism that caches intermediate computation results during sampling, increasing sampling speed by about 2x (inference time on a single A800 GPU drops from 58.2 seconds to 32.2 seconds), with parameters such as cache_ratio, warmup_ratio, and refresh_interval balancing efficiency against quality.
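
The article does not reproduce the released implementation, but the idea can be sketched as follows: on warmup steps and every refresh_interval-th step the full logits are recomputed; otherwise positions whose max logit is already high are treated as stable and their cached logits reused, with only the least confident fraction (1 - cache_ratio) recomputed. The `forward_partial` method and the exact refresh schedule below are assumptions for illustration, not the real API.

```python
import torch

@torch.no_grad()
def ml_cache_logits(model, tokens, step, num_steps, state,
                    cache_ratio=0.9, warmup_ratio=0.3, refresh_interval=5):
    """Hypothetical sketch of one ML-Cache denoising step.

    Assumes `model(tokens)` returns (seq_len, vocab) logits and that
    `model.forward_partial(tokens, idx)` recomputes logits only for the
    positions in `idx` -- both interfaces are illustrative.
    """
    warmup_steps = int(num_steps * warmup_ratio)
    if step < warmup_steps or step % refresh_interval == 0 or "logits" not in state:
        state["logits"] = model(tokens)                   # full forward pass
    else:
        logits = state["logits"]
        # Max logit as a stability score: confident positions reuse cached values.
        stability = logits.max(dim=-1).values
        n_recompute = max(1, int(len(tokens) * (1.0 - cache_ratio)))
        idx = stability.topk(n_recompute, largest=False).indices
        logits[idx] = model.forward_partial(tokens, idx)  # refresh weakest positions
    return state["logits"]  # feed into the usual confidence-based unmasking step
```

Each denoising step then proceeds with the same confidence-based unmasking as in the earlier sketch; cache_ratio controls how much work is skipped, warmup_ratio keeps the early high-uncertainty steps fully computed, and refresh_interval bounds how stale the cache can become.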


Section 04

Performance of Lumina-DiMOO (Evidence)

Lumina-DiMOO has achieved leading results among open-source unified multimodal models across multiple widely used benchmarks:

  • UniGenBench Leaderboard: ranked first among open-source unified models on the generation benchmark maintained by the Tencent Hunyuan team;
  • GenEval Benchmark: strong results on key metrics such as object-attribute binding and spatial-relationship understanding;
  • DPG Benchmark: high scores for faithful generation from complex text descriptions;
  • OneIG-EN Benchmark: strong capability on English image generation tasks;
  • TIIF Benchmark: outstanding results in text-to-image faithfulness evaluation.

In terms of sampling efficiency, the ML-Cache mechanism delivers a substantial speedup while maintaining generation quality.

Section 05

Application Scenarios and Practical Value

Creative Design and Content Production

Assists designers in generating high-quality concept art and rapidly modifying visual assets, lowering the barrier to creative visualization.

Intelligent Customer Service and Visual Question Answering

As a visual question answering engine, it supports image content understanding and accurate responses in scenarios like e-commerce customer service.

Data Augmentation and Synthetic Training

Generates high-quality synthetic training data, expanding datasets to improve the generalization ability of downstream models.

Education and Scientific Research

The open-source release gives the academic community a research foundation for unified multimodal architectures, supporting in-depth analysis and further improvement.


Section 06

Community Ecosystem and Future Development Directions

Community Ecosystem Progress

  • September 2025: Initial version released (model weights, inference code, project homepage);
  • October 2025: Training code open-sourced, Diffusers and ComfyUI support launched;
  • November 2025: Evaluation code based on VLMEvalKit released;
  • December 2025: Research published on a test-time scaling algorithm for diffusion MLLMs;
  • February 2026: Related paper dMLLM-TTS accepted by CVPR 2026.

Future Exploration Directions

  • Support for higher resolutions (4K and above);
  • Expansion to video generation with temporally consistent outputs;
  • Efficiency optimization to reduce inference latency and memory usage;
  • Enhancement of generation and understanding capabilities for non-English languages like Chinese.

Section 07

Summary and Outlook

Lumina-DiMOO unifies multimodal generation and understanding through a fully discrete diffusion architecture, an important breakthrough in the architectural design of multimodal large models. Its open-source release not only provides a powerful tool but also demonstrates the feasibility and advantages of a unified architecture. Going forward, the model is expected to push multimodal AI toward greater generality, efficiency, and ease of use, serving as an important reference point in the field.