Zing Forum

SGREC: A New Method for Zero-Shot Referring Expression Comprehension Based on Query-Driven Scene Graphs

SGREC constructs a query-driven scene graph as a structured bridge between vision and language. By combining the strengths of Vision-Language Models (VLMs) and Large Language Models (LLMs), it achieves interpretable zero-shot referring expression comprehension and delivers leading performance on multiple benchmarks.

Zero-Shot Learning · Referring Expression Comprehension · Scene Graphs · Vision-Language Models · Explainable AI · Multimodal Fusion
Published 2026-03-26 12:05 · Recent activity 2026-03-27 19:48 · Estimated read 7 min

Section 01

SGREC: A New Method for Zero-Shot Referring Expression Comprehension Based on Query-Driven Scene Graphs (Introduction)

SGREC constructs a query-driven scene graph as a structured bridge between vision and language. By combining the strengths of Vision-Language Models (VLMs) and Large Language Models (LLMs), it achieves interpretable zero-shot referring expression comprehension and delivers leading performance on multiple benchmarks.

Section 02

Background and Challenges

Background

Referring Expression Comprehension (REC) is a core task at the intersection of computer vision and natural language processing, aiming to locate specific objects in images based on natural language descriptions. Traditional REC relies on large amounts of annotated data, while zero-shot REC requires models to locate targets via text queries without task-specific training data, making it a research hotspot.

Challenges

Existing VLMs (e.g., CLIP) directly measure the feature similarity between text and image regions, but they struggle to capture fine-grained details and complex object relationships. LLMs excel at semantic reasoning but cannot directly abstract visual features into textual semantics, which limits their application in REC.
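To make the CLIP-style baseline concrete, the sketch below scores candidate image regions by cosine similarity to a query embedding and picks the best one. The 4-dimensional vectors are made-up stand-ins for real CLIP features, used only to illustrate why a single global similarity score carries no relational structure.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def score_regions(text_emb, region_embs):
    """Rank candidate regions by similarity to the query embedding
    and return the index of the best region plus all scores."""
    scores = [cosine_sim(text_emb, r) for r in region_embs]
    return int(np.argmax(scores)), scores

# Toy embeddings standing in for real CLIP features.
text_emb = np.array([1.0, 0.0, 1.0, 0.0])     # query embedding
region_embs = [
    np.array([0.9, 0.1, 0.8, 0.0]),           # region 0: close to the query
    np.array([0.1, 1.0, 0.0, 0.9]),           # region 1: unrelated content
]
best, scores = score_regions(text_emb, region_embs)
```

Note that the score is a single scalar per region: relationships between regions ("left of the table", "holding a cup") are invisible to this baseline, which is exactly the gap SGREC targets.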

Section 03

Overview of the SGREC Method

The core innovation of SGREC is introducing scene graphs as a structured intermediary between vision and language, combining the visual perception capability of VLMs with the semantic reasoning capability of LLMs. The overall architecture comprises three stages:

  1. Use a VLM to construct a query-driven scene graph that encodes the spatial relationships, descriptive captions, and object-interaction information relevant to the query;
  2. Bridge low-level image regions and the high-level semantic understanding required by the LLM via the scene graph;
  3. Let the LLM infer the target object from the structured text representation of the scene graph and provide an explanation for its decision.
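The three stages above can be sketched as a small pipeline. The `vlm_detect`, `vlm_relate`, and `llm_reason` callables are hypothetical stand-ins for real VLM/LLM calls; the article does not specify the method at this API level, so this is a minimal sketch of the control flow, not the authors' implementation.

```python
# Hypothetical three-stage SGREC-style pipeline; model calls are stubbed.

def build_scene_graph(image, query, vlm_detect, vlm_relate):
    """Stage 1: build a query-driven scene graph from VLM outputs."""
    objects = vlm_detect(image, query)         # query-relevant objects + attributes
    relations = vlm_relate(image, objects)     # spatial / interaction edges
    return {"objects": objects, "relations": relations}

def graph_to_text(graph):
    """Stage 2: serialize the graph into LLM-readable text."""
    lines = [f"{o['id']}: {o['category']}" for o in graph["objects"]]
    lines += [f"{s} {rel} {t}" for s, rel, t in graph["relations"]]
    return "\n".join(lines)

def resolve_expression(image, query, vlm_detect, vlm_relate, llm_reason):
    """Stage 3: let the LLM pick the referred object from the graph text."""
    graph = build_scene_graph(image, query, vlm_detect, vlm_relate)
    return llm_reason(graph_to_text(graph), query)

# Stub models so the sketch runs end to end.
def vlm_detect(image, query):
    return [{"id": "p1", "category": "person"}, {"id": "t1", "category": "table"}]

def vlm_relate(image, objects):
    return [("p1", "left of", "t1")]

def llm_reason(graph_text, query):
    # A real LLM would reason over graph_text; the stub always picks p1.
    return "p1", f"'p1' is the person left of the table, matching: {query}"

target, explanation = resolve_expression(
    None, "person left of the table", vlm_detect, vlm_relate, llm_reason)
```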

Section 04

Construction of Query-Driven Scene Graph

The scene graph is the core component of SGREC, adopting a query-driven strategy:

  • The VLM first analyzes the image to identify object instances and their attributes;
  • It then filters objects and relationships relevant to the query to build a compact subgraph, covering basic object information (category, position, appearance), spatial relationships (e.g., "left of", "above"), and interaction relationships (e.g., "holding", "looking at");
  • Advantages: improved computational efficiency, stronger semantic relevance, and reduced difficulty of LLM reasoning.
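The filtering step above can be illustrated with a toy subgraph extractor: keep only objects whose category appears in the query, then keep relations whose two endpoints both survive. The real method relies on VLM relevance judgments; plain substring matching here is an assumption used only to show the compact-subgraph idea.

```python
def query_driven_subgraph(objects, relations, query):
    """Return the compact (objects, relations) subgraph relevant to the query.
    Toy relevance test: an object survives if its category word appears
    in the query; a relation survives if both endpoints survive."""
    q = query.lower()
    kept = {o["id"]: o for o in objects if o["category"].lower() in q}
    kept_relations = [(s, rel, t) for s, rel, t in relations
                      if s in kept and t in kept]
    return list(kept.values()), kept_relations

objects = [
    {"id": "p1", "category": "person"},
    {"id": "t1", "category": "table"},
    {"id": "d1", "category": "dog"},
]
relations = [("p1", "left of", "t1"), ("d1", "under", "t1")]
objs, rels = query_driven_subgraph(
    objects, relations, "the person to the left of the table")
```

The dog and its relation are pruned because the query never mentions them, which is what keeps the graph small and the downstream LLM prompt focused.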

Section 05

Building the Vision-Language Bridge

SGREC solves the modality gap problem through scene graphs:

  • As a structured semantic representation, the scene graph retains the richness of visual information while remaining text-readable, enabling a seamless connection between the VLM and the LLM;
  • The modular design makes the system interpretable: users can inspect the scene graph to understand the parsing process, and researchers can optimize each component individually without retraining an end-to-end model.
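One way to realize the text-readable bridge is to render the scene graph as a structured prompt. The exact textual format below is an assumption (the article only requires that the graph be readable as text), so treat this as one plausible serialization, not the paper's format.

```python
def graph_to_prompt(objects, relations, query):
    """Render objects and relations as labeled facts, followed by the query,
    producing the text an LLM would reason over."""
    lines = ["Objects:"]
    lines += [f"- {o['id']}: a {o['appearance']} {o['category']} at {o['box']}"
              for o in objects]
    lines.append("Relations:")
    lines += [f"- {s} {rel} {t}" for s, rel, t in relations]
    lines.append(f"Query: {query}")
    return "\n".join(lines)

prompt = graph_to_prompt(
    objects=[{"id": "p1", "category": "person", "appearance": "red-shirted",
              "box": (12, 30, 88, 190)}],
    relations=[("p1", "left of", "t1")],
    query="the person wearing red on the left of the table",
)
```

Because this intermediate text is human-readable, a user can audit exactly which facts the LLM saw, which is the interpretability property the section describes.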

Section 06

LLM Reasoning and Interpretability

  • The LLM infers the target object that best matches the query from the scene graph;
  • It generates a detailed explanation alongside the localization result (e.g., "This object is selected because it is the person wearing red clothes on the left side of the table, matching the query description");
  • Significance of interpretability: it enhances user trust and facilitates system debugging and error analysis (locating biases in scene-graph construction or LLM reasoning).
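Consuming the LLM's answer might look like the sketch below, assuming the LLM is instructed to reply with JSON carrying the chosen object id and its explanation. This JSON schema is an assumption for illustration; the article does not prescribe an output format.

```python
import json

def parse_llm_answer(raw):
    """Extract (target_id, explanation) from an assumed JSON reply of the
    form {"target_id": ..., "explanation": ...}."""
    data = json.loads(raw)
    return data["target_id"], data["explanation"]

# Example reply a suitably prompted LLM might return.
raw = ('{"target_id": "p1", "explanation": "p1 is the person wearing red '
       'clothes on the left side of the table, matching the query."}')
target, explanation = parse_llm_answer(raw)
```

Keeping the explanation as a first-class output is what enables the debugging workflow above: a wrong `target_id` paired with its stated reason tells you whether the graph or the reasoning was at fault.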

Section 07

Experimental Results and Performance Analysis

SGREC shows leading performance on multiple zero-shot REC benchmarks:

  • RefCOCO validation set accuracy: 66.78%;
  • RefCOCO+ testB accuracy: 53.43%;
  • RefCOCOg validation set accuracy: 73.28%;
  • The advantage is not only reflected in accuracy; interpretability provides additional value for practical deployment (suitable for scenarios with high transparency requirements).

Section 08

Technical Significance and Future Outlook

Technical Significance

SGREC offers a new direction for vision-language fusion: it combines the strengths of models from different modalities through a structured intermediate representation while addressing the interpretability problem. It also represents a new paradigm of modular, interpretable reasoning chains, one that can be extended to tasks such as visual question answering and image captioning.

Future Outlook

As VLM and LLM capabilities continue to improve, the scene-graph approach is expected to extend to complex scenarios such as autonomous driving (understanding object relationships in traffic scenes) and robotics (supporting natural human-robot interaction).