DPC-VQA: Decoupling Perception and Calibration for Efficient Adaptation to New Video Quality Assessment Scenarios

This article introduces the DPC-VQA framework, which obtains a basic quality estimate from a frozen Multimodal Large Language Model (MLLM), uses a lightweight calibration branch to predict residual corrections, and performs video quality assessment without end-to-end retraining. It reaches competitive performance with only 2% trainable parameters and 20% labeled data.

Tags: DPC-VQA · video quality assessment · multimodal large language model · residual calibration · parameter-efficient fine-tuning · UGC · AIGC · MOS annotation
Published 2026-04-14 22:40 · Recent activity 2026-04-15 10:03 · Estimated read 8 min

Section 01

[Introduction] DPC-VQA: An Efficient Video Quality Assessment Framework with Decoupled Perception and Calibration

DPC-VQA (Decoupling Perception and Calibration for VQA) is a framework that efficiently adapts to new video quality assessment (VQA) scenarios. It obtains a basic quality estimate from a frozen Multimodal Large Language Model (MLLM) and combines it with a lightweight residual calibration branch that predicts correction values, enabling VQA without end-to-end retraining. With only 2% trainable parameters and 20% labeled data, it achieves competitive performance, addressing the high cost and poor scenario transferability of traditional VQA methods.


Section 02

[Background] Real-World Challenges in Video Quality Assessment

Video Quality Assessment (VQA) is crucial in the digital video era but faces real-world challenges:

  1. High cost of traditional methods: they rely on manually labeled Mean Opinion Scores (MOS), which require tens to hundreds of annotators per video;
  2. Difficulty adapting MLLMs: end-to-end fine-tuning of MLLMs touches billions to trillions of parameters, leading to high computational and time costs;
  3. High demand for labeled data: effective fine-tuning requires large amounts of MOS data, which is hard to obtain in specialized fields (e.g., medical imaging);
  4. Hard to transfer between scenarios: quality characteristics vary greatly across scenarios such as UGC and AIGC, making it impractical to train a specialized model for each.

Section 03

[Method] Core Design of DPC-VQA: Decoupling Perception and Calibration

The core design of DPC-VQA rests on the insight that pre-trained MLLMs already provide strong perceptual priors and only need efficient calibration into the target scenario's MOS space. The framework is decoupled into two modules:

  • Perception module: a frozen MLLM (e.g., LLaVA, Qwen-VL) that extracts general quality-perception features and outputs a basic quality estimate;
  • Calibration module: a lightweight residual branch that predicts correction values (residual learning) for the basic estimate, containing only a small number of trainable parameters (<2% of those touched by traditional MLLM fine-tuning).

End-to-end workflow: input video → sample key frames → frozen MLLM outputs a basic score and features → calibration branch predicts a residual → final score = basic score + residual. Only the parameters of the calibration branch are updated during training.
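The workflow above can be sketched in a few lines. The sketch below is a toy illustration, not the paper's implementation: a random linear projection (`W_frozen`) stands in for the frozen MLLM's basic-score head, and a single linear layer (`w`, `b`) stands in for the calibration branch. The point it demonstrates is the training rule: gradients update only the calibration parameters, the perception side never changes, and the final prediction is basic score plus residual.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16
W_frozen = rng.normal(size=D) * 0.1  # stand-in for the frozen MLLM's scoring head

def frozen_perception(feats):
    """Hypothetical stand-in for the frozen perception module: returns a
    basic quality score and the features it was computed from."""
    return feats @ W_frozen, feats

# Lightweight calibration branch: a single linear layer, the only
# trainable parameters (mirroring the "<2% trainable parameters" idea).
w = np.zeros(D)
b = 0.0

# Toy data: per-video features and target MOS in the new scenario
X = rng.normal(size=(256, D))
mos = X @ rng.normal(size=D) * 0.2 + 3.0

lr = 0.05
for _ in range(500):
    basic, feats = frozen_perception(X)
    pred = basic + feats @ w + b        # final score = basic + residual
    err = pred - mos
    # Gradient descent on MSE; updates touch only the calibration branch.
    w -= lr * feats.T @ err / len(X)
    b -= lr * err.mean()

mse = float(np.mean((frozen_perception(X)[0] + X @ w + b - mos) ** 2))
print(f"calibrated MSE: {mse:.4f}")
```

Because the loss is quadratic in (`w`, `b`), this toy converges quickly; in the real framework the residual branch would be a small neural head on MLLM features, but the frozen/trainable split is the same.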

Section 04

[Evidence] Experimental Validation: Performance in UGC and AIGC Scenarios

Experimental validation shows that DPC-VQA has excellent performance:

  • UGC scenario: comparable to fully fine-tuned MLLM methods while using <2% trainable parameters and 20% of the MOS labels, with significantly shorter training time;
  • AIGC scenario: strong performance, demonstrating cross-scenario transfer ability;
  • Baseline comparison: significantly outperforms traditional metrics (PSNR, SSIM, etc.); comparable to end-to-end fine-tuned MLLMs but more efficient, with a clearer advantage in few-shot settings.
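VQA papers typically report agreement with human MOS via SROCC and PLCC; a minimal sketch of both (assuming no tied scores, and not taken from the paper) makes the distinction concrete: SROCC rewards getting the ranking right, PLCC rewards a linear fit.

```python
import numpy as np

def plcc(pred, mos):
    """Pearson linear correlation coefficient (PLCC)."""
    p, m = pred - pred.mean(), mos - mos.mean()
    return float((p @ m) / (np.linalg.norm(p) * np.linalg.norm(m)))

def srocc(pred, mos):
    """Spearman rank-order correlation (SROCC); assumes no tied values."""
    def ranks(x):
        order = np.argsort(x)
        r = np.empty(len(x))
        r[order] = np.arange(len(x))
        return r
    return plcc(ranks(pred), ranks(mos))

# Toy check: a monotone but nonlinear mapping preserves rank order,
# so SROCC stays (near) 1 while PLCC drops below 1.
mos = np.array([1.2, 2.5, 3.1, 3.9, 4.6])
pred = mos ** 3 / 10
print(f"SROCC={srocc(pred, mos):.3f}  PLCC={plcc(pred, mos):.3f}")
```

This is also why calibration matters: a frozen model may already rank videos well (high SROCC) while its raw scores sit in the wrong range, which a residual branch can correct.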

Section 05

[Highlights] Technical Advantages of DPC-VQA

Technical highlights of DPC-VQA:

  1. Parameter efficiency: the large model is frozen and only the small calibration branch is trained, giving high efficiency in storage, training, and deployment;
  2. Data efficiency: only 20% of the MOS labels are needed to match the performance of traditional methods trained on 100% of the data;
  3. Modular design: decoupling perception and calibration allows the perception module to be upgraded independently, and multiple calibration branches to be trained for different scenarios while sharing one perception base.
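Point 3 can be made concrete with a small sketch. Everything here is hypothetical (the class names `CalibrationHead` and `DPCModel` are invented for illustration): one shared perception base produces a basic score and features, each scenario owns a tiny head, and switching scenarios is just a dictionary lookup rather than retraining a full model.

```python
from dataclasses import dataclass, field

@dataclass
class CalibrationHead:
    """A scenario-specific residual head: a handful of weights and a bias."""
    w: list
    b: float = 0.0
    def residual(self, feats):
        return sum(wi * fi for wi, fi in zip(self.w, feats)) + self.b

@dataclass
class DPCModel:
    """Shared frozen perception base + per-scenario calibration heads."""
    heads: dict = field(default_factory=dict)   # scenario name -> head
    def score(self, scenario, basic_score, feats):
        # final score = frozen basic estimate + scenario-specific residual
        return basic_score + self.heads[scenario].residual(feats)

model = DPCModel()
model.heads["UGC"] = CalibrationHead(w=[0.1, -0.2], b=0.05)
model.heads["AIGC"] = CalibrationHead(w=[0.0, 0.3], b=-0.1)

feats = [1.0, 2.0]   # features from the shared perception base
print(model.score("UGC", 3.5, feats), model.score("AIGC", 3.5, feats))
```

Each head is a few numbers, so storing one per scenario costs almost nothing compared to duplicating the MLLM.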

Section 06

[Applications] Applicable Scenarios of DPC-VQA

Applicable scenarios of DPC-VQA:

  • Video streaming platforms: Assess the quality of uploaded videos to determine compression parameters and recommendation strategies;
  • AI content generation platforms: Automatically assess the quality of generated videos and filter high-quality content;
  • Video conferencing systems: Real-time assessment of call quality and dynamic adjustment of encoding parameters;
  • Video editing tools: Help editors quickly assess the quality of different video versions and optimize post-production workflows.

Section 07

[Limitations & Outlook] Current Shortcomings and Future Research Directions

Limitations:

  • Frozen MLLMs inherit the backbone's limitations; when the MLLM is insensitive to certain quality issues, the calibration branch struggles to compensate;
  • Insufficient temporal modeling (e.g., stuttering, jitter);
  • Outputs only a single quality score, without multi-dimensional assessment;
  • Real-time performance needs optimization.

Future directions:

  • Adaptive calibration (online learning);
  • Zero-shot transfer (no target-scenario labels needed);
  • Multi-task learning (joint VQA and other video understanding tasks);
  • Enhanced interpretability (pointing out specific problem areas and types).

Section 08

[Conclusion] Value and Significance of DPC-VQA

DPC-VQA provides an efficient and practical solution for video quality assessment. By decoupling perception from calibration, it reduces training and deployment costs while maintaining high performance. As AIGC content proliferates and video scenarios multiply, this efficient adaptation capability is especially valuable. For developers and researchers, it demonstrates the effectiveness of the "large model + lightweight adaptation" paradigm in video understanding tasks, opening up new research directions. Paper link: http://arxiv.org/abs/2604.12813v1.