Zing Forum


VLA Data Forge: A Framework for Building Embodied Reasoning Datasets for Vision-Language-Action Models

A research-grade Python framework for curating and preprocessing datasets for VLA model training, focusing on reasoning-aware embodied datasets. It supports Embodied-CoT and Bridge v2 datasets and provides multi-backend VLM reasoning trajectory generation capabilities.

Tags: VLA, Vision-Language-Action, robotics, embodied AI, dataset curation, reasoning, Gemini, GPT-4o, Qwen-VL, Bridge v2
Published 2026-04-16 03:12 · Recent activity 2026-04-16 03:21 · Estimated read 7 min

Section 01

Core Introduction to VLA Data Forge

VLA Data Forge is a research-grade Python framework specifically designed for curating and preprocessing reasoning-aware embodied datasets for Vision-Language-Action (VLA) model training. It supports the Embodied-CoT and Bridge v2 datasets, provides multi-backend (Gemini, GPT-4o, Qwen-VL) VLM reasoning trajectory generation capabilities, and bridges the gap between raw robot demonstration data and VLA models that require explicit reasoning abilities.


Section 02

Background and Problem

With the rapid development of VLA models in robotics, high-quality, structured, reasoning-capable training data has become a key bottleneck. Traditional robot demonstration data often lacks explicit annotations of the reasoning process, limiting models' ability to generalize to complex tasks. VLA Data Forge is designed to address this problem.


Section 03

Technical Architecture and Core Components

VLA Data Forge adopts a modular architecture with core components including:

  1. Data Schema Layer: Defines standardized data types such as RobotAction, ReasoningTrace, ECoTEpisode/BridgeEpisode, InterleavedEpisode;
  2. Dataset Readers: ECoTDatasetReader (loads Embodied-CoT from HuggingFace), BridgeV2DatasetReader (supports TFDS/HDF5/RLDS formats);
  3. Model Backends: Google Gemini, OpenAI GPT-4o, Qwen-VL (supports API or local inference);
  4. Reasoning Trajectory Generation Pipeline: PromptBuilder, ReasoningTraceParser, TracePostprocessor, GenerationPipeline (supports resume from breakpoints);
  5. Data Curation Pipeline: EpisodeInterleaver, DatasetValidator, multi-format exporter;
  6. Visualization Tools: FrameViewer (frame grid, reasoning overlay, GIF generation), TrajectoryViewer (action plotting, coverage heatmap).

Section 04

Reasoning Trajectory Alignment Strategies and Output Format

The framework supports three reasoning trajectory alignment strategies:

  • exact: Only steps directly annotated by VLM get reasoning;
  • nearest: Propagate from the nearest annotated step (default);
  • broadcast: Copy a single segment-level trajectory to all steps.

Alignment confidence scores (1.0 = direct, 0.7 = propagated) help downstream models judge reliability. The curated dataset is output in JSONL format, with each line containing schema_version, episode_id, task_description, alignment_metadata, provenance, and steps (each step including action, observation, reasoning, etc.).
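The nearest strategy and the resulting JSONL shape can be sketched as follows. This is a minimal illustration under stated assumptions: the function name, its signature, and the record field values are hypothetical; only the confidence scheme (1.0 direct, 0.7 propagated) and the top-level JSONL keys come from the article:

```python
import json


def align_nearest(num_steps, annotated):
    """Propagate reasoning from the nearest VLM-annotated step.

    `annotated` maps step index -> reasoning string. Directly annotated
    steps keep confidence 1.0; propagated steps get 0.7.
    """
    if not annotated:
        return [None] * num_steps
    aligned = []
    for i in range(num_steps):
        nearest = min(annotated, key=lambda j: abs(j - i))  # closest annotated index
        conf = 1.0 if i in annotated else 0.7
        aligned.append({"reasoning": annotated[nearest], "confidence": conf})
    return aligned


steps = align_nearest(5, {1: "reach toward the cup", 4: "close the gripper"})
print([s["confidence"] for s in steps])  # [0.7, 1.0, 0.7, 0.7, 1.0]

# One output record in the JSONL shape described above (values illustrative).
record = {
    "schema_version": "1.0",
    "episode_id": "ep-0001",
    "task_description": "pick up the red cup",
    "alignment_metadata": {"strategy": "nearest"},
    "provenance": {"backend": "gemini"},
    "steps": [
        {"action": [0.01, 0.0, 0.0], "observation": "frame_000.png",
         "reasoning": steps[0]}
    ],
}
line = json.dumps(record)
```

Keeping the confidence score per step, rather than discarding it after alignment, lets a training pipeline downweight or filter propagated reasoning without re-running the VLM.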

Section 05

Quick Start and Usage Examples

Installation:

  1. Create a conda environment: conda create -n vla-forge python=3.11 -y and activate it;
  2. Clone the repository: git clone https://github.com/akira398/vla-data-forge;
  3. Install dependencies: pip install -e ".[viz]", and optionally install model backend extras (e.g., the Gemini backend requires a GOOGLE_API_KEY).

Usage Examples:

  • Visualize Embodied-CoT: python scripts/visualize_ecot.py --max-episodes 3;
  • Generate reasoning trajectories: python scripts/generate_traces.py --max-episodes 10 (Gemini by default, can specify GPT-4o/Qwen-VL);
  • Curate interleaved dataset: python scripts/curate_interleaved.py --max-episodes 100 --alignment nearest;
  • Validate output: python scripts/validate_dataset.py outputs/curated/episodes.jsonl.
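The validation step can be sketched as a minimal check over the JSONL output. This is a hypothetical illustration of the kind of checks validate_dataset.py might perform, not its actual implementation; the required top-level keys follow the schema listed in Section 04:

```python
import json
from pathlib import Path

REQUIRED_KEYS = {"schema_version", "episode_id", "task_description",
                 "alignment_metadata", "provenance", "steps"}


def validate_jsonl(path):
    """Return (num_valid, errors) for a curated episodes.jsonl file."""
    valid, errors = 0, []
    for lineno, raw in enumerate(Path(path).read_text().splitlines(), start=1):
        try:
            record = json.loads(raw)
        except json.JSONDecodeError as exc:
            errors.append(f"line {lineno}: invalid JSON ({exc})")
            continue
        missing = REQUIRED_KEYS - record.keys()
        if missing:
            errors.append(f"line {lineno}: missing keys {sorted(missing)}")
        elif not record["steps"]:
            errors.append(f"line {lineno}: empty steps")
        else:
            valid += 1
    return valid, errors


# Demo with a synthetic two-line file: one well-formed record, one broken one.
sample = Path("sample_episodes.jsonl")
good = {k: "x" for k in REQUIRED_KEYS}
good["steps"] = [{"action": [], "observation": "", "reasoning": ""}]
sample.write_text(json.dumps(good) + "\n" + '{"episode_id": "only-id"}\n')
n_valid, errs = validate_jsonl(sample)
print(n_valid, len(errs))  # 1 1
```

Running a check like this after curation catches schema drift early, before a malformed episodes.jsonl reaches a training job.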

Section 06

Application Scenarios

VLA Data Forge is suitable for the following scenarios:

  1. VLA model training data preparation: Generate high-quality training data for VLA models requiring explicit reasoning such as OpenVLA, π0;
  2. Robotics learning research: Explore the impact of reasoning trajectories on policy learning and the effects of different alignment strategies;
  3. Multimodal learning: Build vision, language, action multimodal datasets to support cross-modal research;
  4. Data augmentation: Expand existing robot demonstration datasets by generating reasoning trajectories via VLM.

Section 07

Summary and Value

VLA Data Forge is a forward-looking piece of embodied-intelligence data infrastructure. Through systematic reasoning trajectory generation and data integration, it provides high-quality data support for VLA model training. Its modular architecture, multi-backend support, and extensible design (e.g., adding new modality extractors or model backends) can keep pace with the rapidly evolving field of robot learning. For researchers and developers working on VLA models, robot learning, or multimodal intelligence, it is a tool worth watching and adopting.