Zing Forum

PointVG-R: A Multimodal Large Model Training Framework for Visual Pointing Reasoning

A reinforcement-learning-based training system for multimodal visual pointing understanding that jointly optimizes hand detection, pointing-ray prediction, and target-object localization via PPO/GRPO, delivering significant improvements on visual grounding tasks.

Tags: visual pointing understanding · multimodal large language models · reinforcement learning · PPO · GRPO · visual grounding · geometry-aware reasoning · Ray · veRL · vLLM
Published 2026-04-14 18:13 · Recent activity 2026-04-14 18:21 · Estimated read 5 min

Section 01

PointVG-R: A Reinforcement Learning Framework for Visual Pointing Reasoning

PointVG-R is a training framework that applies reinforcement learning (PPO/GRPO) to multimodal large models for visual pointing reasoning. By jointly optimizing hand detection, pointing-ray prediction, and target-object localization, it achieves significant gains on visual grounding tasks.


Section 02

Background: Challenges in Visual Pointing Understanding

Traditional visual grounding methods reduce the task to object detection or segmentation, ignoring the semantics of the pointing action itself (hand pose, pointing direction, spatial geometric relations). This leads to poor performance in complex scenes with multiple candidate objects or occlusion. While multimodal large language models (MLLMs) open new possibilities for visual understanding, effectively integrating visual and textual information with structured reasoning remains a core challenge. PointVG-R addresses this gap through reinforcement learning and geometry-aware reasoning.


Section 03

Core Architecture & Reward Function Design

PointVG-R is built on the veRL framework and supports multimodal input (text, image, video). Key components:

1. Multi-GPU training infrastructure (Ray + veRL/FSDP + vLLM) for distributed computing.
2. RLHFDataset, which packages the prompt, ground truth (hand bbox, pointing ray, key points, target bbox), and images/videos.
3. A reward function (compute_score) combining several terms: hand_iou (hand-bbox overlap), ray_cos (pointing-ray direction consistency), kpt_score (key-point distance), obj_iou (target-bbox overlap, weighted ×5), and stage2_format (output-format compliance, ×2), plus penalties for redundant tool calls and extra bounding boxes.
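As a rough sketch, the reward terms above might combine as follows. The ×5 weight on obj_iou and ×2 weight on stage2_format come from the text; the helper names, normalization, and penalty magnitudes are illustrative assumptions, not the project's exact implementation.

```python
# Hypothetical sketch of a compute_score-style reward; weights for obj_iou
# (x5) and stage2_format (x2) are from the article, the rest is assumed.
import math

def bbox_iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def ray_cos(pred_dir, gt_dir):
    """Cosine similarity between predicted and ground-truth pointing rays."""
    dot = pred_dir[0] * gt_dir[0] + pred_dir[1] * gt_dir[1]
    norm = math.hypot(*pred_dir) * math.hypot(*gt_dir)
    return dot / norm if norm > 0 else 0.0

def compute_score(pred, gt):
    """Weighted sum of the reward dimensions listed in the article."""
    kpt_dist = math.dist(pred["keypoint"], gt["keypoint"])
    kpt_score = max(0.0, 1.0 - kpt_dist)  # assumes normalized coordinates
    score = (
        bbox_iou(pred["hand_bbox"], gt["hand_bbox"])        # hand_iou
        + ray_cos(pred["ray"], gt["ray"])                   # ray_cos
        + kpt_score                                         # kpt_score
        + 5.0 * bbox_iou(pred["obj_bbox"], gt["obj_bbox"])  # obj_iou, x5
        + 2.0 * float(pred.get("format_ok", False))         # stage2_format, x2
    )
    score -= 0.1 * pred.get("extra_tool_calls", 0)  # illustrative penalty
    score -= 0.1 * pred.get("extra_bboxes", 0)      # illustrative penalty
    return score
```

A perfect prediction with compliant formatting would score 1 + 1 + 1 + 5 + 2 = 10 under these assumed weights, which shows how heavily the target-object IoU dominates the signal.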


Section 04

Training Configuration & Hyperparameter Tuning

Training is configured via config.yaml:

- Data: train/val file paths, max prompt/response length, max pixels.
- Model: base model path, trust remote code, LoRA rank (reducing memory via low-rank adaptation).
- Inference: number of samples, temperature, top-p.
- Trainer: total epochs, GPUs per node, save/validation frequency.
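An illustrative config.yaml fragment covering the four groups above; the key names loosely follow common veRL-style layouts and are assumptions, not the project's exact schema:

```yaml
# Illustrative sketch only -- field names and values are assumptions.
data:
  train_files: /path/to/train.jsonl
  val_files: /path/to/val.jsonl
  max_prompt_length: 2048
  max_response_length: 1024
  max_pixels: 1048576          # cap on input image resolution
model:
  path: /path/to/base_model
  trust_remote_code: true
  lora_rank: 32                # low-rank adaptation to cut memory use
rollout:
  n: 8                         # samples per prompt
  temperature: 1.0
  top_p: 0.95
trainer:
  total_epochs: 3
  n_gpus_per_node: 8
  save_freq: 50
  val_freq: 25
```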


Section 05

Performance & Real-World Applications

PointVG-R achieves a 15.86-point mIoU improvement in pointing-based visual grounding. Applications include: smart home control (point to devices for operations), assistive robot navigation (understand user pointing intent), AR interaction (natural gesture-based info retrieval), and autonomous driving (in-car pointing interaction).


Section 06

Implementation Details & Best Practices

Training is launched via train.sh (set MODEL_PATH, TRAIN_FILES, and VAL_FILES; enable VLLM_USE_V1=1 for better inference performance). Data format: JSONL records containing a prompt, ground_truth (coordinate annotations), and images. Best practices: tune the reward weights for your scenario, handle negative samples carefully, and improve the multimodal fusion strategy (currently simple field concatenation).
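One way to picture a single JSONL training record: the top-level fields (prompt, ground_truth, images) come from the text, while the inner ground_truth keys mirror the reward dimensions and are assumptions about the exact schema.

```python
# Sketch of one JSONL record; inner key names are illustrative assumptions.
import json

record = {
    "prompt": "Which object is the person pointing at?",
    "ground_truth": {
        "hand_bbox": [120, 200, 180, 260],   # [x1, y1, x2, y2]
        "ray": [0.8, -0.6],                  # pointing direction vector
        "keypoints": [[150, 230]],           # e.g. fingertip location
        "target_bbox": [400, 150, 520, 300],
    },
    "images": ["scene_0001.jpg"],
}

# JSONL stores one JSON object per line.
line = json.dumps(record, ensure_ascii=False)
print(line)
```

Writing one such line per example yields a file that can be pointed to by TRAIN_FILES/VAL_FILES.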


Section 07

Conclusion & Future Outlook

PointVG-R advances visual pointing understanding through reinforcement learning and geometry-aware reasoning, with open-source code and documentation. Future directions include continuous pointing tracking in video, integration of 3D depth information, and handling of more complex interactions (multi-person pointing, gesture combinations).