Zing Forum

DeepScan: Enhance Visual Reasoning Capabilities of Large Vision-Language Models Without Training

DeepScan is a training-free framework that significantly improves the performance of large vision-language models (LVLMs) on fine-grained visual reasoning tasks through three stages: hierarchical scanning, refocusing, and evidence-enhanced reasoning.

Tags: DeepScan, vision-language models, visual reasoning, training-free, fine-grained understanding, LVLM, computer vision, multimodal AI
Published 2026-04-09 11:41 · Recent activity 2026-04-09 11:46 · Estimated read: 7 min

Section 01

DeepScan: Guide to the Training-Free Visual Reasoning Enhancement Framework for Large Vision-Language Models

DeepScan is a training-free framework designed to enhance the performance of large vision-language models (LVLMs) on fine-grained visual reasoning tasks. It simulates the human bottom-up reasoning process through three core stages: hierarchical scanning, refocusing, and evidence-enhanced reasoning. Experiments show that the framework significantly improves model performance: on the V* benchmark, using Qwen2.5-VL-7B as the backbone model, it achieves an overall accuracy of 90.6%, a 16.3% improvement over the original model.


Section 02

Bottlenecks in Visual Reasoning and DeepScan's Design Intuition

Large vision-language models (LVLMs) perform well on image understanding and question-answering tasks, but they struggle with complex reasoning tasks that require fine-grained visual localization. The traditional coarse-to-fine, single-shot localization strategy is fragile and error-prone in complex scenes. Humans usually solve visual problems bottom-up: identify local clues → recover complete evidence → reason over that evidence. DeepScan is a framework built exactly on this intuition.


Section 03

Detailed Explanation of DeepScan's Core Three-Stage Architecture

DeepScan consists of three tightly coupled stages:

Hierarchical Scanning

Divide the image into patches, generate patch-level attention maps, extract clue regions, recover evidence via point-prompt segmentation, and filter candidates.
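The scanning step above can be sketched as follows. The grid split and top-k clue selection are shown with a stand-in attention function; the real framework scores patches with BLIP-ITM Grad-CAM maps, so `toy_attention` and all other names here are illustrative only:

```python
# Hypothetical sketch of the hierarchical-scanning stage: split an image
# into a grid of patches, score each patch with a (stubbed) attention
# function, and keep the highest-scoring patches as clue regions.

def split_into_patches(width, height, patch):
    """Return (x, y, w, h) boxes tiling the image in patch-sized cells."""
    boxes = []
    for y in range(0, height, patch):
        for x in range(0, width, patch):
            boxes.append((x, y, min(patch, width - x), min(patch, height - y)))
    return boxes

def extract_clue_regions(boxes, attention_score, top_k=3):
    """Rank patches by attention score and return the top-k as clue regions."""
    return sorted(boxes, key=attention_score, reverse=True)[:top_k]

# Toy usage: pretend attention peaks near the image centre at (32, 32).
def toy_attention(box):
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2
    return -((cx - 32) ** 2 + (cy - 32) ** 2)  # higher = closer to centre

boxes = split_into_patches(64, 64, 16)
clues = extract_clue_regions(boxes, toy_attention, top_k=2)
print(clues)
```

In the real pipeline, each selected clue box would then be handed to the visual expert's point-prompt segmentation to recover the full evidence region.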

Refocusing

Apply zoom-in/zoom-out operations to the fused evidence crops, select the minimal view that contains the key evidence, and discard irrelevant distractions.
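A minimal sketch of the view selection, assuming a simple union-plus-margin policy; the paper's exact zoom-in/zoom-out criterion may differ, and the 10% margin is an arbitrary choice for illustration:

```python
# Hypothetical refocusing sketch: given evidence boxes (x, y, w, h),
# compute the minimal enclosing view, then expand it by a margin so the
# crop keeps a little context while dropping irrelevant background.

def minimal_view(boxes, image_w, image_h, margin=0.1):
    """Return (x0, y0, x1, y1) for the padded minimal enclosing view."""
    x0 = min(b[0] for b in boxes)
    y0 = min(b[1] for b in boxes)
    x1 = max(b[0] + b[2] for b in boxes)
    y1 = max(b[1] + b[3] for b in boxes)
    pad_x = int((x1 - x0) * margin)
    pad_y = int((y1 - y0) * margin)
    return (max(0, x0 - pad_x), max(0, y0 - pad_y),
            min(image_w, x1 + pad_x), min(image_h, y1 + pad_y))

# Two evidence boxes in a 100x100 image.
view = minimal_view([(10, 10, 20, 20), (50, 40, 10, 10)], 100, 100)
print(view)
```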

Evidence-Enhanced Reasoning

Construct a hybrid evidence memory (fine-grained evidence crops plus coarse-grained refined views), organize it into a multi-image prompt for the LVLM, and generate accurate answers grounded in the visual evidence.
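Putting the third stage together, here is a sketch of how the hybrid evidence memory might be packed into a multi-image prompt. The dict schema is an assumption for illustration, not the official message format, and image "handles" are plain strings standing in for decoded crops:

```python
# Hypothetical sketch of assembling the hybrid evidence memory into a
# multi-image prompt for an LVLM chat API.

def build_multi_image_prompt(question, fine_crops, refined_views):
    """Interleave evidence images with text into a single prompt payload."""
    content = []
    for i, crop in enumerate(fine_crops, 1):
        content.append({"type": "image", "image": crop,
                        "note": f"fine-grained evidence {i}"})
    for i, view in enumerate(refined_views, 1):
        content.append({"type": "image", "image": view,
                        "note": f"refined view {i}"})
    content.append({"type": "text",
                    "text": f"Using the evidence images above, answer: {question}"})
    return content

prompt = build_multi_image_prompt(
    "What color is the traffic light?",
    fine_crops=["crop_0.png"], refined_views=["view_0.png"])
print(len(prompt))  # 3 entries: two images plus the question
```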


Section 04

Training-Free Nature and Multi-Expert Collaboration Advantages of DeepScan

Training-Free Feature

DeepScan is a plug-and-play training-free framework that can be integrated into different LVLM backbone networks without additional adaptation costs, offering high practical value and deployment flexibility.

Multi-Expert Collaboration

Enhance LVLM capabilities through two pluggable experts:

  • Search Expert: Uses BLIP-ITM to generate patch-level Grad-CAM attention maps for local clue exploration;
  • Visual Expert: Provides point-prompt segmentation and text-conditioned detection. The official implementation uses a combination of LangSAM and SAM2.
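The pluggable-expert idea can be sketched as a simple dispatch pattern. The class names mirror the two experts above, but the stub bodies merely stand in for the real BLIP-ITM, LangSAM, and SAM2 calls:

```python
# Hypothetical sketch of pluggable experts: each expert exposes a single
# callable interface, so implementations can be swapped without touching
# the rest of the pipeline (and without any training).

class SearchExpert:
    """Stands in for BLIP-ITM + Grad-CAM patch-level attention mapping."""
    def __call__(self, image, query):
        return {"attention_map": f"gradcam({image}, {query})"}

class VisualExpert:
    """Stands in for LangSAM detection + SAM2 point-prompt segmentation."""
    def __call__(self, image, points):
        return {"masks": [f"mask@{p}" for p in points]}

experts = {"search": SearchExpert(), "visual": VisualExpert()}
att = experts["search"]("img.png", "red cup")
seg = experts["visual"]("img.png", [(12, 30)])
print(att["attention_map"], seg["masks"])
```

Because each expert is addressed only through its call signature, replacing LangSAM with another text-conditioned detector is a one-line change in the registry.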

Section 05

Experimental Results and Performance of DeepScan

DeepScan performs excellently in multiple fine-grained visual reasoning benchmarks:

V* Benchmark

When using Qwen2.5-VL-7B:

  • Overall accuracy: 90.6%;
  • Attribute recognition: 93.0%, spatial relationships: 86.8%;
  • Improvements over the original model: V* benchmark +16.3%, TreeBench +5.5%.

High-Resolution Benchmarks

  • HR-Bench-4K: 75.0%;
  • HR-Bench-8K: 72.4%.

Scale Expansion

DeepScan-72B achieves an accuracy of 94.2% on the V* benchmark (k=∞), demonstrating good scalability.


Section 06

Deployment Architecture and Supported Models of DeepScan

Deployment Services

DeepScan uses a service-oriented pipeline architecture, which requires starting the following services:

  1. Search Expert Service (BLIP-ITM + Grad-CAM);
  2. Visual Expert Service (LangSAM detection);
  3. SAM2 Segmentation Service;
  4. LVLM Service (supports backends like LLaVA, Qwen, etc.).
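As a sketch, the four services above could be orchestrated like this. The endpoint URLs and payload shapes are assumptions, and the transport function is injected so the flow can be exercised offline with a stub:

```python
# Hypothetical orchestration of the service-oriented pipeline. In a real
# deployment, `call(url, payload)` would issue an HTTP request to each
# service; here the ports and routes are illustrative placeholders.

SERVICES = {
    "search": "http://localhost:8001/attend",    # BLIP-ITM + Grad-CAM
    "visual": "http://localhost:8002/detect",    # LangSAM detection
    "sam2":   "http://localhost:8003/segment",   # SAM2 segmentation
    "lvlm":   "http://localhost:8004/generate",  # LLaVA / Qwen backend
}

def run_pipeline(image, question, call):
    """Run the four stages in order; the transport `call` is injectable."""
    clues = call(SERVICES["search"], {"image": image, "query": question})
    boxes = call(SERVICES["visual"], {"image": image, "clues": clues})
    masks = call(SERVICES["sam2"], {"image": image, "boxes": boxes})
    return call(SERVICES["lvlm"], {"image": image, "evidence": masks,
                                   "question": question})

# Offline usage with a stub transport that records each stage it hits.
trace = []
def fake_call(url, payload):
    trace.append(url)
    return payload

answer = run_pipeline("img.png", "What is on the table?", fake_call)
print(len(trace))  # 4 stages invoked
```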

Supported Models

The official implementation supports: LLaVA-1.5-7B, Qwen2-VL-7B, Qwen2.5-VL-7B/32B/72B.


Section 07

Practical Significance and Application Scenarios of DeepScan

DeepScan provides an important technical advance in visual reasoning, showing that careful pipeline design that simulates human cognitive processes can significantly improve model performance without any additional training cost.

Its application scenarios include:

  • Visual question-answering systems requiring precise localization and fine-grained understanding;
  • Document analysis and chart understanding;
  • Medical image analysis;
  • Scene understanding in autonomous driving.

Section 08

Summary and Domain Contributions of DeepScan

DeepScan represents an important advancement in the field of visual reasoning. By imitating the human bottom-up reasoning process, it pushes training-free methods to a new performance level. Its modular design, multi-expert collaboration architecture, and excellent experimental results make it a powerful tool for enhancing large vision-language models.