Omni123: Using 2D Data to Compensate for 3D Data Scarcity, A Native Foundation Model Unifying Text-to-2D and Text-to-3D Generation

Omni123 proposes a 3D-native foundation model that unifies text-to-2D and text-to-3D generation by representing text, images, and 3D as discrete tokens in a shared sequence space, and using abundant 2D data as geometric priors to improve 3D representations.

Tags: 3D generation, multimodal learning, autoregressive models, cross-modal consistency, text-to-3D, 2D-to-3D, foundation models, computer vision
Published 2026-04-03 01:29 · Recent activity 2026-04-03 12:17 · Estimated read 4 min

Section 01

Omni123: Using 2D Data to Compensate for 3D Data Scarcity, A Native Foundation Model Unifying Text-to-2D and Text-to-3D Generation

Omni123 addresses the scarcity of high-quality 3D data with a 3D-native foundation model that unifies text-to-2D and text-to-3D generation: text, images, and 3D shapes are represented as discrete tokens in a shared sequence space, and abundant 2D data serves as a geometric prior that strengthens the learned 3D representations.
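To make the shared token space concrete, here is a minimal sketch of how three modality-specific vocabularies can be folded into one id range. The vocabulary sizes, tokenizers, and helper names are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of a shared discrete token space for text, images, and 3D.
# All names and sizes (TEXT_VOCAB, IMG_CODEBOOK, SHAPE_CODEBOOK, offsets)
# are illustrative assumptions, not Omni123's actual tokenizers.

TEXT_VOCAB = 32_000       # e.g. a BPE text vocabulary
IMG_CODEBOOK = 8_192      # e.g. a VQ image-tokenizer codebook
SHAPE_CODEBOOK = 8_192    # e.g. a VQ 3D-shape-tokenizer codebook

# Each modality gets a disjoint id range, so a single embedding table
# and a single softmax head can serve all three modalities.
IMG_OFFSET = TEXT_VOCAB
SHAPE_OFFSET = TEXT_VOCAB + IMG_CODEBOOK
UNIFIED_VOCAB = TEXT_VOCAB + IMG_CODEBOOK + SHAPE_CODEBOOK

def to_unified(text_ids, image_ids, shape_ids):
    """Map per-modality token ids into the single shared id space."""
    return (
        list(text_ids)
        + [t + IMG_OFFSET for t in image_ids]
        + [t + SHAPE_OFFSET for t in shape_ids]
    )

# Example: a (text, image, 3D) triple becomes one flat sequence that a
# standard decoder-only transformer can model left to right.
seq = to_unified(text_ids=[5, 17, 9], image_ids=[3, 3, 1020], shape_ids=[44, 7])
assert max(seq) < UNIFIED_VOCAB
```

With disjoint id ranges, one embedding table and one output head cover all three modalities, which is what lets a single autoregressive model treat a mixed sequence uniformly.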


Section 02

Background and Challenges: Data Dilemma in 3D Generation

Multimodal large language models have made strong progress in text and image generation, but extending them to 3D runs into the scarcity of high-quality 3D data. Existing methods mostly rely on indirect pipelines (generating 2D images, then optimizing them into 3D), which tend to sacrifice geometric consistency and produce contradictions across viewpoints.


Section 03

Core Insights and Architecture: Cross-Modal Unified Representation and Autoregressive Framework

Core insight: cross-modal consistency between images and 3D can serve as an implicit structural constraint. Omni123 therefore unifies text, images, and 3D into shared discrete tokens and models them with a single autoregressive framework that handles mixed multimodal sequences and learns complex cross-modal relationships.
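As a rough illustration of the autoregressive framework, the sketch below trains a tiny decoder-only transformer on a mixed-modal token sequence under a causal mask. The model size and vocabulary are placeholder assumptions and do not reflect Omni123's actual architecture.

```python
# Sketch: next-token prediction over a mixed text/image/3D token sequence.
# Hypothetical toy model; sizes are placeholders, not the paper's.
import torch
import torch.nn as nn

class TinyMixedModalLM(nn.Module):
    def __init__(self, vocab=48_384, d_model=256, n_layers=4, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab)

    def forward(self, ids):
        # Causal mask: each position attends only to earlier tokens,
        # regardless of which modality those tokens came from.
        T = ids.size(1)
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        h = self.blocks(self.embed(ids), mask=mask)
        return self.head(h)

model = TinyMixedModalLM()
ids = torch.randint(0, 48_384, (2, 16))   # stand-in mixed-modal token ids
logits = model(ids[:, :-1])               # predict token t+1 from tokens <= t
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), ids[:, 1:].reshape(-1)
)
```

The point of the sketch is that nothing in the loss is modality-specific: one cross-entropy over the unified vocabulary covers text, image, and 3D tokens alike.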


Section 04

Innovative Training Paradigm: X-to-X Interleaved Training

The paper introduces an X-to-X interleaved training paradigm that does not require fully aligned text-image-3D triples. It coordinates multiple cross-modal tasks (text→image, image→3D, and so on) within one model, enabling efficient data utilization, consistency learning, and improved robustness.
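One way to picture interleaved training is as batch construction: any available modality pair becomes a (condition, target) sequence, so partially paired data still contributes. The modality markers and the task menu below are hypothetical, sketched under the assumption of pairwise-paired data.

```python
# Sketch of X-to-X interleaved sequence construction. Any pairing
# (text, image), (image, 3D), or (text, 3D) yields a valid training
# sequence, so full text-image-3D triples are not required.
import random

TXT, IMG, SHP = "<text>", "<image>", "<shape>"   # hypothetical modality markers

def make_sequence(sample):
    """Turn whatever modality pair a sample actually has into one
    interleaved sequence: condition first, target second."""
    tasks = []
    if "text" in sample and "image" in sample:
        tasks.append((TXT, sample["text"], IMG, sample["image"]))   # text -> image
    if "image" in sample and "shape" in sample:
        tasks.append((IMG, sample["image"], SHP, sample["shape"]))  # image -> 3D
    if "text" in sample and "shape" in sample:
        tasks.append((TXT, sample["text"], SHP, sample["shape"]))   # text -> 3D
    m_in, x_in, m_out, x_out = random.choice(tasks)
    return [m_in, *x_in, m_out, *x_out]

# A text-image pair and an image-3D pair both contribute training
# sequences, even though neither is a complete triple.
print(make_sequence({"text": ["a", "red", "chair"], "image": [3, 14, 159]}))
print(make_sequence({"image": [3, 14, 159], "shape": [26, 535]}))
```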


Section 05

Joint Optimization of Three Constraints: Semantics, Appearance, and Geometry

Training jointly enforces three constraints: semantic alignment (the generated 3D accurately reflects the text semantics), appearance fidelity (rendered images have high visual quality), and multi-view geometric consistency (structure stays consistent across viewpoints).
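Schematically, the three constraints can be read as a weighted joint objective. The sketch below is only a notational summary with placeholder terms and weights, not the paper's actual loss formulation.

```python
# Schematic combination of the three constraints as one weighted objective.
# Term definitions and weights are placeholder assumptions.
def total_loss(l_semantic, l_appearance, l_geometry,
               w_sem=1.0, w_app=1.0, w_geo=1.0):
    """l_semantic:   text <-> generated-3D alignment (e.g. a CLIP-style score)
    l_appearance: visual quality of the asset's rendered views
    l_geometry:   agreement of structure across rendered viewpoints"""
    return w_sem * l_semantic + w_app * l_appearance + w_geo * l_geometry

print(total_loss(0.42, 0.17, 0.08))  # weighted sum of the three terms
```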


Section 06

Experimental Validation: Significant Performance Improvement

Experiments show that Omni123 improves performance on text-guided 3D generation and editing tasks: the generated assets outperform existing methods in geometric consistency, semantic accuracy, and visual quality, pointing toward an expansion into multimodal 3D world models.


Section 07

Significance and Outlook: Cross-Modal Transfer and Future Directions

Theoretically, the work demonstrates the feasibility of using a data-rich modality (2D) to assist learning in a scarce one (3D). Practically, it provides new tools for fields such as 3D content creation and VR. Future directions include complex scenes, 4D dynamic content, and deeper multimodal fusion.