Zing Forum

Audio-Omni: The First All-Round Framework Unifying Audio Understanding, Generation, and Editing

Audio-Omni is the first end-to-end unified framework that enables generation and editing across general sound, music, and speech domains, while integrating multimodal understanding capabilities. This framework combines a frozen multimodal large language model for high-level reasoning and a trainable diffusion Transformer for high-fidelity synthesis, achieving state-of-the-art performance in multiple benchmark tests.

Tags: Audio-Omni, audio generation, audio editing, multimodal models, diffusion Transformer, speech synthesis, music generation, unified framework
Published 2026-04-13 00:08 · Recent activity 2026-04-14 11:22 · Estimated read: 6 min

Section 01

[Introduction] Audio-Omni: The First All-Round Framework Unifying Audio Understanding, Generation, and Editing

Audio-Omni is the first end-to-end unified framework that enables generation and editing across general sound, music, and speech, while also integrating multimodal understanding. Its core architecture pairs a frozen multimodal large language model (responsible for high-level semantic reasoning) with a trainable diffusion Transformer (responsible for high-fidelity synthesis), achieving state-of-the-art results on multiple benchmarks and marking a key step for audio AI toward general generative intelligence.

Section 02

[Background] The Fragmentation Dilemma of Audio AI

Current audio AI tasks are mostly handled by independent models; no unified framework integrates the three core capabilities of understanding, generation, and editing. This fragmentation carries real costs: developers must maintain multiple APIs and data formats, and information isolation between models blocks cross-task collaboration (e.g., an understanding model cannot directly guide an editing model).

Section 03

[Methodology] The Groundbreaking Architecture of Audio-Omni

The core of the Audio-Omni architecture consists of two complementary components working in synergy:

  1. Frozen Multimodal Large Language Model (MLLM): serves as the semantic understanding engine, parsing natural-language instructions and audio semantics while leveraging pretrained knowledge to avoid the cost of full fine-tuning;
  2. Trainable Diffusion Transformer: acts as the high-fidelity synthesis engine, generating high-quality audio (sound effects, music, and speech) through step-by-step denoising;
  3. Collaboration Mechanism: the MLLM outputs high-level semantic representations that guide the diffusion Transformer's generation, enabling precise execution of complex instructions (e.g., style transfer, accent adjustment).
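The collaboration mechanism above can be sketched in toy form: a frozen encoder produces a semantic condition vector, and a denoising loop (standing in for the diffusion Transformer) is guided by that vector at every step. All names here (`frozen_mllm_encode`, `denoise_step`) are illustrative assumptions, not the paper's actual API.

```python
import hashlib
import numpy as np

def frozen_mllm_encode(instruction: str, dim: int = 16) -> np.ndarray:
    # Stand-in for the frozen MLLM: deterministically maps an instruction
    # to a fixed semantic embedding (hypothetical; a real system would run
    # a pretrained multimodal LLM and expose its hidden states).
    seed = int.from_bytes(hashlib.sha256(instruction.encode()).digest()[:4], "big")
    return np.random.default_rng(seed).standard_normal(dim)

def denoise_step(x: np.ndarray, cond: np.ndarray, t: int, steps: int) -> np.ndarray:
    # Toy denoiser: nudges the noisy latent toward the condition vector.
    # A real diffusion Transformer would instead predict noise using
    # attention over latent and condition tokens.
    alpha = (steps - t) / steps
    return x + 0.5 * alpha * (cond - x)

def generate(instruction: str, steps: int = 50, dim: int = 16) -> np.ndarray:
    cond = frozen_mllm_encode(instruction, dim)        # high-level semantics
    x = np.random.default_rng(0).standard_normal(dim)  # start from pure noise
    for t in range(steps):
        x = denoise_step(x, cond, t, steps)            # semantics guide every step
    return x

latent = generate("a warm jazz piano loop")
```

The design point is the division of labor: the frozen encoder is never updated, so only the denoiser would need training in a real system.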

Section 04

[Evidence] Dataset Support and Performance

Dataset: The team built the large-scale AudioEdit dataset (over 1 million editing pairs), ensuring diversity and quality through automatic filtering plus manual verification and addressing the scarcity of audio-editing data.

Performance: Across audio understanding, generation, and editing benchmarks, Audio-Omni outperforms all previous unified methods and matches or exceeds specialized expert models, validating the effectiveness of the unified architecture.
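A minimal sketch of the "automatic filtering plus manual verification" idea behind a dataset like AudioEdit. The quality score, threshold, and review sampling here are hypothetical assumptions; the source does not specify the actual scoring function or cutoff.

```python
from dataclasses import dataclass

@dataclass
class EditPair:
    instruction: str   # e.g. "remove the background hum"
    quality: float     # hypothetical automatic score in [0, 1]

def filter_pairs(pairs, threshold=0.8, review_every=3):
    # Automatic filter: keep only pairs above the score threshold.
    kept = [p for p in pairs if p.quality >= threshold]
    # Manual verification: sample every Nth kept pair for human review.
    review_queue = kept[::review_every]
    return kept, review_queue

pairs = [
    EditPair("remove hum", 0.95),
    EditPair("add reverb", 0.40),
    EditPair("pitch up vocals", 0.85),
    EditPair("mute drums", 0.90),
]
kept, review = filter_pairs(pairs)
```

Spot-checking a sample of the automatically kept pairs, rather than reviewing all of them, is what makes a million-pair dataset feasible to verify.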

Section 05

[Highlights] Emergent General Capabilities of Audio-Omni

Audio-Omni demonstrates emergent capabilities without targeted training:

  1. Knowledge-Enhanced Reasoning Generation: Uses the MLLM knowledge base to generate audio that conforms to specific styles (e.g., Baroque organ music);
  2. In-Context Learning Generation: quickly masters new styles and editing patterns from a handful of examples, without additional fine-tuning;
  3. Zero-Shot Cross-Language Control: Supports non-English instructions (e.g., Chinese, Japanese) due to the MLLM's multilingual pre-training foundation.
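The in-context and cross-language behaviors above both reduce to prompt construction against the frozen MLLM. Below is a hedged sketch of how few-shot examples and a non-English query might be packed into a single prompt; the template format is an assumption, not the paper's actual one.

```python
def build_prompt(examples, query):
    # Pack few-shot (instruction -> edit description) pairs ahead of the
    # new query; the frozen MLLM generalizes from these in context.
    lines = []
    for instr, desc in examples:
        lines.append(f"Instruction: {instr}\nEdit: {desc}")
    lines.append(f"Instruction: {query}\nEdit:")
    return "\n\n".join(lines)

prompt = build_prompt(
    [("make it sound like vinyl", "add crackle, narrow the band"),
     ("underwater voice", "low-pass filter, slow modulation")],
    "把鼓声调小一点",  # Chinese query: "turn the drums down a bit"
)
```

Because the MLLM was pretrained multilingually, the final query can be in Chinese or Japanese while the in-context examples stay in English.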

Section 06

[Conclusion and Outlook] Toward General Generative Audio Intelligence

Audio-Omni marks audio AI's move toward general generative intelligence, simplifying development and enabling cross-modal applications (e.g., video soundtrack generation, audio creation from scripts). The team will open-source the code, models, and datasets to accelerate progress in the field. Its success shows that a unified architecture can balance multi-task coverage with high performance, offering a reference for modeling other modalities and for the development of general AI.