Zing Forum


AVGen-Bench: A Task-Driven Benchmark for Multi-Granularity Evaluation of Text-to-Audio-Visual Generation

Microsoft Research Team releases AVGen-Bench, the first comprehensive evaluation benchmark for text-to-audio-visual (T2AV) generation tasks, revealing common semantic controllability flaws in current T2AV models.

T2AV · Text-to-Audio-Visual Generation · Multimodal Evaluation · MLLM · Audio-Visual Benchmark · Generative AI · Semantic Controllability
Published 2026-04-10 01:59 · Recent activity 2026-04-10 12:44 · Estimated read 5 min

Section 01

AVGen-Bench: First Task-Driven Evaluation Benchmark for Text-to-Audio-Visual Generation Released

Microsoft Research Team has released AVGen-Bench, the first comprehensive evaluation benchmark for text-to-audio-visual (T2AV) generation tasks. This benchmark addresses the fragmentation issue in existing evaluations, reveals common semantic controllability flaws in current T2AV models through a multi-granularity framework, and has open-sourced the code and dataset (link: http://aka.ms/avgenbench).


Section 02

Background: Fragmentation Dilemma in T2AV Evaluation

T2AV technology has great potential in fields such as advertising, short video, and game development, but existing evaluation methods lag behind and remain fragmented: most test audio and video separately, rely on coarse-grained embedding similarity, and fail to capture cross-modal semantic consistency (e.g., fine-grained requirements such as synchronization between piano key presses and notes, or matching rain sounds to raindrop positions).
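A minimal sketch of the coarse-grained approach criticized here: one cosine similarity between clip-level pooled embeddings. The embedding values below are hypothetical; the point is that temporal pooling discards exactly the fine-grained cues (key press vs. note timing, raindrop position vs. rain sound) the text mentions, so a misaligned clip can still score high.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical clip-level pooled embeddings for a video and its audio track.
# Pooling over time collapses all temporal detail into one vector per modality.
video_emb = np.array([0.80, 0.10, 0.55, 0.20])
audio_emb = np.array([0.75, 0.15, 0.60, 0.10])

score = cosine_similarity(video_emb, audio_emb)
print(f"clip-level AV similarity: {score:.3f}")  # high even if timing is off
```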


Section 03

AVGen-Bench Framework: Task-Driven Design

The core innovations of AVGen-Bench include: 1. High-quality dataset: covers 11 types of real-world scenarios (music performances, natural sound effects, etc.), with prompts manually verified to ensure clear semantics; 2. Hybrid evaluation architecture: lightweight models evaluate basic perceptual quality (image clarity, audio signal-to-noise ratio), while multi-modal large language models (MLLMs) assess deep semantic understanding (temporal/causal/spatial relationships).
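The hybrid architecture described above can be sketched as a simple router: deterministic lightweight checks score perceptual quality, while semantic questions are delegated to an MLLM judge. Everything below (the `Sample` fields, the 0-1 scales, the stubbed judge) is an illustrative assumption, not AVGen-Bench's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Sample:
    video_sharpness: float  # 0-1, from a lightweight image-quality metric
    audio_snr_db: float     # signal-to-noise ratio of the audio track, in dB
    prompt: str

def perceptual_score(s: Sample) -> float:
    """Lightweight perceptual check: clarity and SNR, no semantics involved."""
    snr_norm = min(max(s.audio_snr_db / 40.0, 0.0), 1.0)  # clamp to [0, 1]
    return 0.5 * s.video_sharpness + 0.5 * snr_norm

def semantic_score(s: Sample, mllm_judge: Callable[[str], float]) -> float:
    """Route deep semantic questions (temporal/causal/spatial) to an MLLM."""
    question = f"Does the generated clip satisfy: '{s.prompt}'? Score 0-1."
    return mllm_judge(question)

sample = Sample(video_sharpness=0.9, audio_snr_db=30.0,
                prompt="piano keys move in sync with the notes")
stub_judge = lambda q: 0.4  # placeholder for a real MLLM API call
print(perceptual_score(sample), semantic_score(sample, stub_judge))
```

The split keeps cheap, reproducible metrics out of the MLLM, which is reserved for judgments that actually need language-level understanding.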


Section 04

Key Findings: Significant Gap Between Aesthetics and Semantics

Tests of mainstream T2AV models reveal strong aesthetic performance but serious flaws in semantic reliability, including: text rendering failures (garbled or incorrect text), incoherent speech (broken dialogue semantics, mismatched lip movements), weak physical reasoning (outputs that violate physical common sense), and a complete collapse of pitch control (inability to generate specified notes or scales).
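The pitch-control failure mode is the easiest of these to measure objectively. A hypothetical checker (not the benchmark's own code): estimate the dominant frequency of a generated tone via FFT and accept it only within a half-semitone of the requested note.

```python
import numpy as np

def dominant_freq(signal: np.ndarray, sr: int) -> float:
    """Return the frequency (Hz) of the strongest spectral peak."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return float(freqs[np.argmax(spectrum)])

def matches_note(signal: np.ndarray, sr: int, target_hz: float) -> bool:
    """Accept if the dominant pitch is within half a semitone of target."""
    f = dominant_freq(signal, sr)
    tol = 2 ** (1 / 24)  # half a semitone as a frequency ratio
    return target_hz / tol <= f <= target_hz * tol

sr = 16000
t = np.arange(sr) / sr
on_pitch = np.sin(2 * np.pi * 440.00 * t)   # prompt asked for A4 (440 Hz)
off_pitch = np.sin(2 * np.pi * 523.25 * t)  # model produced C5 instead

print(matches_note(on_pitch, sr, 440.0), matches_note(off_pitch, sr, 440.0))
```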


Section 05

Evaluation Method: Multi-Granularity Hierarchical System

AVGen-Bench evaluation is divided into three granularities: 1. Perceptual layer: evaluates basic quality (video clarity, temporal coherence, audio spectral characteristics); 2. Semantic layer: assesses semantic alignment between generated content and prompts (objects, actions, audio-video matching); 3. Controllability layer: evaluates responses to fine-grained instructions (e.g., adjusting rain sound volume, playing speed).
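The three granularities above can be summarized as a per-layer report. The layer names follow the article's description; the metric names, scores, and plain averaging below are illustrative assumptions, not AVGen-Bench's actual metrics or weights.

```python
# Illustrative three-layer scoring: average each layer's metrics separately
# so aesthetic strength cannot mask semantic or controllability failures.
LAYERS = {
    "perceptual": ["video_clarity", "temporal_coherence", "audio_spectrum"],
    "semantic": ["objects", "actions", "av_matching"],
    "controllability": ["volume_control", "speed_control"],
}

def layer_scores(metric_scores: dict[str, float]) -> dict[str, float]:
    """Average each layer's metrics into one score per granularity."""
    return {
        layer: sum(metric_scores[m] for m in metrics) / len(metrics)
        for layer, metrics in LAYERS.items()
    }

# Hypothetical model results mirroring the reported pattern:
# strong perception, weaker semantics, weakest fine-grained control.
scores = {
    "video_clarity": 0.92, "temporal_coherence": 0.85, "audio_spectrum": 0.88,
    "objects": 0.70, "actions": 0.55, "av_matching": 0.40,
    "volume_control": 0.20, "speed_control": 0.25,
}
print(layer_scores(scores))
```

Reporting per layer, rather than one blended number, is what exposes the aesthetics-versus-semantics gap discussed in Section 04.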


Section 06

Industry Implications and Future Directions

Implications: current models favor aesthetics over semantics, so they should be used cautiously in scenarios requiring precise semantic control (e.g., brand displays in advertising, educational content). Future directions: improving models' fine-grained semantic compliance, with breakthroughs needed especially in pitch control and adherence to physical laws. The team has open-sourced the AVGen-Bench resources to accelerate community collaboration.