SAMA: A New Framework for Large Language Models in Multi-turn Referential Video Dialogue

A multimodal video understanding framework accepted at NeurIPS 2025 that unifies fine-grained video understanding with precise referential localization by combining a spatiotemporal context aggregator with a segmentation model.

Tags: Video Understanding · Large Language Models · Multimodal AI · Video Grounding · NeurIPS 2025 · SAM · Open Source · Computer Vision
Published 2026-03-28 16:39 · Last activity 2026-03-28 16:53 · Estimated read: 5 min
Section 01

[Introduction] SAMA: A New Framework for Large Language Models in Multi-turn Referential Video Dialogue

SAMA is a large language model framework for multi-turn referential video dialogue, accepted at NeurIPS 2025, that addresses the core challenge of unifying spatiotemporal semantic understanding with precise referential localization in video comprehension. The project forms a complete technical system, combining a high-quality dataset, an innovative model architecture, and a comprehensive evaluation benchmark, and significantly enhances the fine-grained spatiotemporal understanding capabilities of video large language models. The code will be open-sourced soon.

Section 02

Research Background and Problem Definition

Current video large multimodal models (Video LMMs) have limited fine-grained spatiotemporal understanding. The core challenge spans two dimensions: video referential understanding (semantic information) and video grounding (object-region segmentation). Existing methods mostly handle these two tasks separately, which limits the development of unified interaction capabilities. The field's bottlenecks are the lack of high-quality unified video instruction data and the absence of benchmarks that comprehensively evaluate multi-turn spatiotemporal referential dialogue.
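
To make the two dimensions concrete, a hypothetical unified training sample might pair each dialogue turn with both a textual answer (referential understanding) and a grounding target (localization). The field names, file name, and mask reference below are illustrative, not the released data format:

```python
# Hypothetical sketch (not the actual SAMA-239K schema): one sample that
# couples semantics and grounding so both tasks can be learned jointly.
sample = {
    "video": "clip_0001.mp4",  # illustrative file name
    "dialogue": [
        {
            "question": "What is the person on the left holding?",
            "answer": "A red umbrella.",
            "grounding": {"frame": 12, "object_id": 3},  # placeholder mask reference
        },
        {
            "question": "Where does that object end up?",
            "answer": "It is placed on the bench near the end of the clip.",
            "grounding": {"frame": 87, "object_id": 3},  # same object, later frame
        },
    ],
}
print(len(sample["dialogue"]))
```

The second turn refers back to the first ("that object"), which is exactly the multi-turn contextual consistency that separate understanding and grounding pipelines struggle to maintain.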

3

Section 03

Core Methods and Technical Implementation of SAMA

The core contributions and technical implementation of SAMA are as follows:

  1. SAMA-239K Dataset: Contains 239,000 samples and 15,000 videos, supporting joint learning of referential understanding, localization, and multi-turn dialogue tasks;
  2. Model Architecture: Integrates a spatiotemporal context aggregator (capturing spatiotemporal dependencies) with the Segment Anything Model (SAM, for precise pixel localization) to achieve synergistic enhancement of semantic understanding and localization;
  3. SAMA-Bench Benchmark: 5,067 questions across 522 videos, evaluating the contextual consistency and accuracy of multi-turn dialogue;
  4. Technical Details: Based on PyTorch, provides 1B/4B/8B multi-scale model variants, integrates public datasets such as LVVIS, and open-sources annotation files.
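
The exact aggregator design has not been released yet. As a rough NumPy sketch of the general idea in point 2, attention-pooling per-frame features into a single context vector that a downstream segmentation head could condition on (all shapes, names, and the pooling scheme are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def aggregate_spatiotemporal(frames, query):
    """Attention-pool per-frame features (T, D) with a query vector (D,).

    Returns a single (D,) context vector that weights frames by their
    relevance to the query, i.e. a minimal temporal-attention aggregator.
    """
    scores = frames @ query / np.sqrt(frames.shape[-1])  # (T,) similarity
    weights = softmax(scores)                            # (T,) sums to 1
    return weights @ frames                              # (D,) weighted mean

T, D = 8, 16  # toy sizes: 8 frames, 16-dim features
rng = np.random.default_rng(0)
frames = rng.standard_normal((T, D))  # stand-in for per-frame visual features
query = rng.standard_normal(D)        # stand-in for a referential query embedding
ctx = aggregate_spatiotemporal(frames, query)
print(ctx.shape)  # (16,)
```

In the full system, a context vector like `ctx` would serve as a prompt for the SAM-based segmentation branch, which is what ties semantic understanding to pixel-level localization.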

Section 04

Experimental Results and Performance

Experimental results show:

  • Significantly outperforms existing video large language models on the SAMA-Bench benchmark;
  • Achieves state-of-the-art (SOTA) performance on general video localization benchmarks with strong generalization capabilities;
  • Maintains competitiveness in standard visual understanding tasks without sacrificing basic capabilities.

Section 05

Application Scenarios and Future Outlook

Application scenarios include intelligent video surveillance (object tracking), video creation (instruction-based editing), education (conversational understanding), and assistance for visually impaired users (content description). Future directions include expanding to 3D and panoramic video modalities, improving real-time processing, and integrating with robotic vision systems; the complete code will be open-sourced soon.

Section 06

Conclusion: Significance and Impact of SAMA

The SAMA project unifies video referential understanding and grounding, advances research on multimodal video understanding, and provides technical support for industrial applications. Once open-sourced, it should encourage community innovation, enable new video AI applications, and have a lasting impact on the development of the field.