Zing Forum

PPV-CPT: Cultivating Perception-Prediction-Verification Capabilities for Multimodal Agents During Continual Pre-Training

PPV-CPT is a framework that introduces a Perception-Prediction-Verification (PPV) loop during the continual pre-training phase, so that vision-language models (VLMs) acquire agentic visual reasoning before task-specific fine-tuning, addressing the disconnect between perception and action in traditional training pipelines.

Tags: VLM, continual pre-training, agent, multimodal, perception, prediction, verification, Qwen, LLaVA
Published 2026-04-01 16:11 · Recent activity 2026-04-01 16:24 · Estimated read 6 min

Section 01

PPV-CPT Framework Guide: Cultivating Core Capabilities of Multimodal Agents During Pre-Training

PPV-CPT (Perceive-Predict-Verify Continual Pre-Training) introduces the PPV loop during the continual pre-training phase, so that vision-language models (VLMs) acquire agentic visual reasoning before task-specific fine-tuning, rather than bolting it on afterwards. Its core idea is to forge agent visual reasoning as a foundational capability during pre-training, giving subsequent SFT/RL a stronger starting point.

Section 02

Background: Pain Points of Traditional VLM Agent Training

The current VLM training paradigm trains on static image-text pairs first and then fine-tunes for agent tasks. The resulting models cannot use vision actively for decision-making: they can describe a scene but cannot decide where to look, predict the outcome of an action, or verify their own understanding. Existing approaches rely on SFT or RL, forcing models to learn agent capabilities and task alignment simultaneously, which creates optimization tension and hurts efficiency.

Section 03

Core Methods: PPV Loop and Two-Stage Continual Pre-Training

The PPV loop comprises three capabilities that reinforce one another in a positive cycle:

1. Perception: active visual attention, goal-oriented selection of focus regions;
2. Prediction: visual state-transition prediction, building a world model to support planning;
3. Verification: self-correction, comparing expected vs. actual outcomes and generating correction strategies.

Training proceeds in two stages. Stage 1 (32K context, 200B tokens) cultivates basic perception and prediction, with a data mix of APC (40%), VSTP (40%), HVC (10%), and general VL (10%). Stage 2 (128K context, 100B tokens) strengthens the complete PPV loop and self-correction, shifting the mix to APC (20%), VSTP (20%), HVC (40%), and general VL (20%); 30% of the seeded assumptions are deliberately erroneous to train correction.
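The two-stage mixture can be written down as a configuration. The ratios and token budgets below come from the text; the dict layout and the `tokens_per_source` helper are illustrative assumptions.

```python
# Two-stage continual pre-training mixture (ratios/budgets from the text;
# this config structure itself is an assumption, not the framework's format).
STAGES = {
    "stage1": {  # 32K context: basic perception + prediction
        "total_tokens_b": 200,
        "mix": {"APC": 0.40, "VSTP": 0.40, "HVC": 0.10, "general_VL": 0.10},
    },
    "stage2": {  # 128K context: full PPV loop + self-correction
        "total_tokens_b": 100,
        "mix": {"APC": 0.20, "VSTP": 0.20, "HVC": 0.40, "general_VL": 0.20},
        "erroneous_assumption_ratio": 0.30,  # 30% seeded errors for correction training
    },
}

def tokens_per_source(stage: str) -> dict:
    """Billions of tokens allocated to each data source in a stage."""
    cfg = STAGES[stage]
    return {name: cfg["total_tokens_b"] * ratio for name, ratio in cfg["mix"].items()}
```

Note how the weight shifts from perception/prediction data (APC, VSTP) in stage 1 toward verification data (HVC) in stage 2.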

Section 04

Data Synthesis and Model Support: Scalability and Compatibility

Data synthesis: all training data is synthesized by VLM annotators (no manual annotation required), drawing on Playwright browser automation, GUI simulators, instructional video frames, and synthetic HTML rendering, and the pipeline can generate over 300B tokens of data.
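As a rough illustration of such an annotation pipeline, the sketch below turns a frame description into a JSON training record. The record schema and the `stub_vlm_annotate` function are hypothetical (a real pipeline would call an actual VLM over real frames); only the source names come from the text.

```python
import json

def stub_vlm_annotate(frame_desc: str) -> dict:
    """Stand-in for a VLM annotator call; real pipelines query a VLM over the frame."""
    focus = frame_desc.split(",")[0].strip()
    return {"focus_region": focus,
            "predicted_next": f"state after interacting with {focus}"}

def make_record(source: str, frame_desc: str) -> str:
    """Build one synthetic perception/prediction training record as a JSON line."""
    ann = stub_vlm_annotate(frame_desc)
    record = {
        "source": source,  # e.g. "playwright", "gui_sim", "video_frames", "html_render"
        "frame": frame_desc,
        "perception_target": ann["focus_region"],
        "prediction_target": ann["predicted_next"],
    }
    return json.dumps(record)
```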

Model support: the framework is compatible with mainstream VLM architectures, including Qwen2-VL (7B/72B, the default), InternVL2 (8B/40B/76B, multilingual), and LLaVA-OneVision (7B/72B, community standard), and it ships complete training scripts and configurations for Accelerate + DeepSpeed distributed training.
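For reference, an Accelerate + DeepSpeed setup is typically driven by a config file like the sketch below. The field names follow Accelerate's standard config format, but the specific values (ZeRO stage, precision, process count) are illustrative assumptions, not the framework's actual settings.

```yaml
# Hedged example of an Accelerate config enabling DeepSpeed
# (values are illustrative, not the framework's shipped defaults)
compute_environment: LOCAL_MACHINE
distributed_type: DEEPSPEED
deepspeed_config:
  zero_stage: 2
  gradient_accumulation_steps: 8
mixed_precision: bf16
num_machines: 1
num_processes: 8
```

A config like this is consumed via `accelerate launch --config_file <config>.yaml <train_script>.py`.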

Section 05

Evaluation System: Comprehensive Measurement of Capabilities and Downstream Performance

PPV-CPT establishes a comprehensive evaluation framework. Intrinsic PPV evaluation measures perception quality (e.g., region relevance), prediction quality (e.g., state-transition accuracy), and verification quality (e.g., error detection rate). Downstream agent benchmarks cover GUI/web agents (e.g., Mind2Web), deep research (e.g., BrowseComp), visual reasoning (e.g., VSR), and general VLM tasks (e.g., VQAv2).
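Two of the intrinsic metrics can be made concrete. The formulas below (recall-style error detection, Jaccard-style region relevance) are plausible instantiations sketched for illustration, not the paper's exact definitions.

```python
def error_detection_rate(flagged: list, is_error: list) -> float:
    """Verification quality: fraction of truly erroneous states the model flagged.
    (Recall-style formula; an assumption, not the paper's definition.)"""
    errors = [f for f, e in zip(flagged, is_error) if e]
    return sum(errors) / len(errors) if errors else 0.0

def region_relevance(selected: list, relevant: list) -> float:
    """Perception quality: overlap between attended and goal-relevant regions,
    sketched here as Jaccard similarity (an assumption)."""
    s, r = set(selected), set(relevant)
    return len(s & r) / len(s | r) if s | r else 1.0
```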

Section 06

Key Experimental Questions and Significance of Paradigm Shift

PPV-CPT is designed to answer five research questions: 1. Is continual pre-training for agents effective? 2. Which PPV capability contributes the most? 3. Are the three capabilities synergistic? 4. Does progressive long-context training help? 5. How does data scale affect capabilities?

Its significance lies in shifting the VLM agent training paradigm: decoupling capability learning from task-alignment optimization, improving data efficiency, and enhancing generalization (general PPV capabilities transfer across multiple tasks).

Section 07

Summary: Future Potential of PPV-CPT

PPV-CPT cultivates the visual reasoning capabilities of VLM agents during pre-training via the PPV loop, filling the gap in traditional processes. As multimodal agents are increasingly applied in fields like GUI automation and robotics, such foundational capability cultivation frameworks will play an important role.