Zing Forum


Multiview Spatial Relation Invariance Evaluation Tool: Testing the Spatial Reasoning Ability of Vision-Language Models

An evaluation toolset built on ScanNet 3D scenes that systematically assesses the cross-view spatial reasoning consistency of vision-language models (VLMs) by generating image pairs where spatial relations flip due to perspective changes.

Tags: Vision-Language Models · Spatial Reasoning · Multiview Evaluation · ScanNet · 3D Scenes · Spatial Relations · VLM Benchmark · View Invariance
Published 2026-04-12 12:15 · Recent activity 2026-04-12 12:18 · Estimated read 6 min

Section 01

Introduction: Overview of the Multiview Spatial Relation Invariance Evaluation Tool

This article introduces the multiview-invariance project—an evaluation toolset built on ScanNet 3D scenes. It systematically assesses the cross-view spatial reasoning consistency of vision-language models (VLMs) by generating image pairs where spatial relations flip due to perspective changes, providing a rigorous benchmark for the 3D spatial reasoning ability of VLMs.


Section 02

Background: The Perspective Consistency Problem in VLM Spatial Reasoning

When humans observe 3D scenes, perspective changes do not affect their understanding of spatial relations; however, VLMs trained on 2D images may have their spatial relation judgments flipped due to perspective changes. This project addresses this issue by constructing test cases to evaluate the robustness of VLM spatial reasoning.


Section 03

Methodology: Technical Implementation and Dataset Construction of the Evaluation Tool

Technical Workflow

  1. Scene Data Acquisition: Download ScanNet scene data (reconstructed meshes, semantic labels, etc.) from Hugging Face;
  2. Scene Preprocessing: Axis alignment to ensure the ground is horizontal, filtering structural elements and small objects;
  3. Object Pairing and Perspective Generation: Enumerate eligible object pairs and find camera positions that flip spatial relations (satisfying constraints like distance, projection, occlusion, etc.);
  4. Reference Arrow Mechanism: Optional colored arrows pointing to the midpoints of objects as spatial anchors to test the impact of reference frames on VLM judgments.
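The perspective-generation step above can be sketched as a search for two camera views under which the projected left/right relation between a fixed object pair flips. The following is a minimal sketch with a simplified pinhole camera (yaw-only orientation, no pitch or roll); all function names here are illustrative, not the project's actual API:

```python
import numpy as np

def image_x(point, cam_pos, cam_yaw):
    """Normalized image x-coordinate of a 3D point in a simplified
    pinhole camera looking along the yaw direction (z is up).
    Assumes the point is in front of the camera (positive depth)."""
    forward = np.array([np.cos(cam_yaw), np.sin(cam_yaw), 0.0])
    right = np.array([np.sin(cam_yaw), -np.cos(cam_yaw), 0.0])  # forward x up
    rel = np.asarray(point, dtype=float) - np.asarray(cam_pos, dtype=float)
    depth = rel @ forward
    return (rel @ right) / depth

def is_left_of(a, b, cam_pos, cam_yaw):
    """True if object a projects to the left of object b in this view."""
    return image_x(a, cam_pos, cam_yaw) < image_x(b, cam_pos, cam_yaw)

def find_flipping_views(a, b, candidates):
    """Return two candidate (position, yaw) views where the left/right
    relation between a and b flips, or None if no such pair exists."""
    for p1, y1 in candidates:
        for p2, y2 in candidates:
            if is_left_of(a, b, p1, y1) != is_left_of(a, b, p2, y2):
                return (p1, y1), (p2, y2)
    return None
```

The real pipeline additionally enforces distance, projection, and occlusion constraints on each candidate view; this sketch shows only the flip search itself.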

Spatial Relation Definitions

  • Left/Right: The difference in projection centers on the image plane exceeds 20 pixels;
  • Front/Back: The spatial depth difference from the camera exceeds 0.1 meters;
  • Up/Down: One object's centroid and bounding-box bottom are both at least 0.1 meters higher than the other's.
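The three thresholded definitions above can be encoded directly. Below is a minimal sketch (function and parameter names are illustrative) that returns a label per axis, or None when the difference falls inside the ambiguity band:

```python
def classify_relations(proj_x_a, proj_x_b, depth_a, depth_b,
                       centroid_z_a, centroid_z_b,
                       bbox_bottom_a, bbox_bottom_b,
                       px_thresh=20, depth_thresh=0.1, height_thresh=0.1):
    """Apply the thresholded spatial-relation definitions.
    Distances are in meters, projections in pixels."""
    rel = {}
    # Left/Right: projection-center difference must exceed 20 pixels
    dx = proj_x_a - proj_x_b
    rel["left_right"] = ("right" if dx > px_thresh
                         else "left" if dx < -px_thresh else None)
    # Front/Back: camera-space depth difference must exceed 0.1 m
    dz = depth_a - depth_b
    rel["front_back"] = ("behind" if dz > depth_thresh
                         else "in_front" if dz < -depth_thresh else None)
    # Up/Down: BOTH the centroid and the bbox bottom must be higher
    if (centroid_z_a - centroid_z_b > height_thresh
            and bbox_bottom_a - bbox_bottom_b > height_thresh):
        rel["up_down"] = "above"
    elif (centroid_z_b - centroid_z_a > height_thresh
            and bbox_bottom_b - bbox_bottom_a > height_thresh):
        rel["up_down"] = "below"
    else:
        rel["up_down"] = None
    return rel
```

Returning None for near-threshold cases keeps ambiguous configurations out of the benchmark, so every kept test case has an unambiguous expected answer.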

Dataset Construction

Generate rendered images (with target objects highlighted), metadata JSON files, and optional arrow-perspective images; divide into training/test sets by scene to avoid information leakage.
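A scene-level split of this kind can be sketched as follows (the 'scene' field name and ratio are illustrative assumptions): shuffling scene IDs rather than individual samples guarantees that no scene contributes images to both splits.

```python
import random

def split_by_scene(samples, test_ratio=0.2, seed=0):
    """Split samples into train/test by scene ID so no scene appears
    in both splits, avoiding information leakage across views of the
    same room. Each sample is a dict with a 'scene' key (assumed)."""
    scenes = sorted({s["scene"] for s in samples})
    rng = random.Random(seed)          # fixed seed for reproducibility
    rng.shuffle(scenes)
    n_test = max(1, int(len(scenes) * test_ratio))
    test_scenes = set(scenes[:n_test])
    train = [s for s in samples if s["scene"] not in test_scenes]
    test = [s for s in samples if s["scene"] in test_scenes]
    return train, test
```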


Section 04

Tool Application: API Integration and Engineering Highlights

API Integration

Built-in OpenAI API support enables batch evaluation of models such as GPT-4V via chatgpt_api.py and run_chatgpt_benchmark.py, with support for custom prompts and question templates.
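The evaluation loop reduces to building a question per rendered image and mapping the model's free-form reply back to a canonical label. The sketch below shows those two testable pieces; the prompt template and function names are hypothetical, not the project's actual templates, and the API call itself is shown only as a comment since it requires credentials:

```python
def build_prompt(obj_a, obj_b):
    """Construct a left/right question for one rendered image
    (hypothetical template; the project supports custom templates)."""
    return (f"In this image, is the {obj_a} to the left or right "
            f"of the {obj_b}? Answer with one word.")

def parse_answer(text):
    """Map a free-form model reply to a canonical label, or None
    when the reply names neither direction."""
    t = text.strip().lower()
    if "left" in t:
        return "left"
    if "right" in t:
        return "right"
    return None

# The actual call would go through the OpenAI client, roughly:
#   from openai import OpenAI
#   client = OpenAI()  # reads OPENAI_API_KEY from the environment
#   reply = client.chat.completions.create(
#       model="gpt-4o",
#       messages=[{"role": "user", "content": build_prompt("chair", "table")}])
```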

Engineering Highlights

  • Cross-Platform Compatibility: Switch to PyVista rendering to support Windows;
  • Occlusion Detection: Use ray casting to determine object visibility;
  • Parameterized Configuration: Rich command-line options to adjust camera parameters, etc.
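The occlusion-detection idea above (cast a ray from the camera toward the target and check whether anything is hit first) can be sketched with sphere obstacles standing in for mesh geometry; the project itself ray-casts against the actual scene mesh, so this is an illustrative simplification:

```python
import numpy as np

def ray_hits_sphere(origin, direction, center, radius):
    """Smallest positive t where origin + t*direction intersects the
    sphere, or None. Assumes direction is unit length."""
    oc = origin - center
    b = 2.0 * (direction @ oc)
    c = oc @ oc - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 1e-9 else None

def is_visible(cam_pos, target, obstacles):
    """True if no obstacle blocks the segment from camera to target.
    obstacles is a list of (center, radius) spheres (stand-ins for
    mesh geometry in this sketch)."""
    cam_pos = np.asarray(cam_pos, dtype=float)
    target = np.asarray(target, dtype=float)
    d = target - cam_pos
    dist = np.linalg.norm(d)
    d = d / dist
    for center, radius in obstacles:
        t = ray_hits_sphere(cam_pos, d, np.asarray(center, dtype=float), radius)
        if t is not None and t < dist:   # hit occurs before the target
            return False
    return True
```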

Section 05

Research Significance: Value and Prospects of VLM Spatial Reasoning Evaluation

  1. Controllable Test Environment: Precise geometric configurations and expected answers avoid the subjectivity of manual annotation;
  2. Perspective Invariance Metrics: Test whether VLMs truly understand 3D space, rather than merely matching 2D pixel patterns;
  3. Reference Arrow Experiments: Quantify the improvement in reasoning consistency from spatial anchors;
  4. Application Prospects: Play an important role in embodied intelligence, robot navigation, AR/VR, and other scenarios.
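A perspective-invariance metric of the kind described in point 2 could be computed as the fraction of view pairs where the model answers correctly in both views, so that a model which is right in one view but flips incorrectly in the other scores zero on that pair. This is an illustrative metric definition, not necessarily the project's exact scoring:

```python
def consistency_score(pairs):
    """Fraction of view pairs where predictions match ground truth in
    BOTH views. Each pair is a dict with pred_view1/gt_view1 and
    pred_view2/gt_view2 keys (hypothetical field names)."""
    if not pairs:
        return 0.0
    ok = sum(1 for p in pairs
             if p["pred_view1"] == p["gt_view1"]
             and p["pred_view2"] == p["gt_view2"])
    return ok / len(pairs)
```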

Section 06

Conclusion: Filling the Gap in VLM Spatial Reasoning Evaluation Tools

The multiview-invariance project fills the gap in tools for VLM spatial reasoning evaluation. By combining 3D scene geometry with 2D VLM evaluation, it provides a rigorous and reproducible testing platform to facilitate research and application development in related fields.