Zing Forum

VISAGE: Suppressing Hallucinations in Multimodal Large Models via Visual Attention Mechanism

This article introduces VISAGE, a training-free decoding framework for multimodal diffusion large language models. It effectively mitigates multimodal hallucinations by quantifying the spatial entropy of cross-attention distributions to penalize token choices lacking visual grounding.

Multimodal large models · Hallucination · Visual attention · Diffusion models · Cross-attention · Spatial entropy · Visual grounding
Published 2026-03-27 01:53 · Recent activity 2026-03-27 14:25 · Estimated read 6 min

Section 01

Introduction: VISAGE Framework Suppresses Hallucinations in Multimodal Large Models

VISAGE is a training-free decoding framework for multimodal diffusion large language models. It mitigates multimodal hallucinations by quantifying the spatial entropy of cross-attention distributions and penalizing token choices that lack visual grounding. The framework targets a core flaw of conventional decoding, an objective mismatch in which the decoder considers only text likelihood and ignores visual support, and calibrates the objective function at inference time to improve the model's fidelity to visual content.


Section 02

Background: The Essence of Multimodal Hallucinations—Objective Mismatch

Traditional multimodal generation models suffer from an objective mismatch during decoding: the decoder selects tokens based solely on text likelihood, ignoring visual support, so language probability becomes a faulty surrogate objective. The result is hallucination: generated text that is grammatically and semantically plausible but unfaithful to the image (e.g., fabricating objects that are not present). The study reinterprets hallucination as an accumulation of local optimization errors: each step's decision relies only on language probability, and these per-step errors compound into global hallucinations.
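The mismatch can be made concrete with a toy decoding step. This is an illustrative sketch, not the paper's implementation: `visual_support`, `lam`, and `pick_token` are hypothetical names. With `lam=0` it reproduces the mismatched objective (pure text likelihood); a nonzero visual weight can change which token wins.

```python
import numpy as np

def pick_token(log_probs: np.ndarray, visual_support: np.ndarray,
               lam: float = 0.0) -> int:
    """Select the next token by a calibrated score.

    log_probs      -- language-model log-likelihood per candidate token
    visual_support -- per-candidate visual-grounding score (higher = better);
                      hypothetical, stands in for any grounding signal
    lam            -- weight on visual support; lam=0 is the mismatched
                      objective that considers text likelihood only
    """
    return int(np.argmax(log_probs + lam * visual_support))

log_probs = np.log(np.array([0.50, 0.30, 0.20]))  # token 0 is most fluent
visual_support = np.array([0.1, 0.9, 0.3])        # but token 1 is grounded

print(pick_token(log_probs, visual_support, lam=0.0))  # 0: likelihood alone
print(pick_token(log_probs, visual_support, lam=2.0))  # 1: grounding flips the pick
```

The fluent-but-ungrounded token wins under the text-only objective; weighting in visual support selects the grounded one instead, which is the calibration idea in miniature.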


Section 03

Method: Core Ideas of the VISAGE Framework

VISAGE (Visual Attention for Grounded Estimation) centers on quantifying the degree of visual grounding via the spatial characteristics of cross-attention:

  1. Spatial Entropy Metric: Concentrated attention (low entropy) indicates reliance on visual evidence, while uniform distribution (high entropy) indicates reliance on language priors (hallucination risk);
  2. Localization Consensus: Requires multiple attention heads to point to similar regions, prioritizing tokens with strong visual grounding;
  3. Inference-time Intervention: No training needed; candidate tokens are re-ranked directly during inference so that generation stays faithful to the visual content.
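The spatial-entropy metric in point 1 can be sketched numerically. The 8×8 patch grid and the attention maps below are made up for illustration, and the paper's exact entropy definition may differ; the point is only the low-vs-high entropy contrast:

```python
import numpy as np

def spatial_entropy(attn: np.ndarray, eps: float = 1e-12) -> float:
    """Shannon entropy of a cross-attention map over image patches.

    Concentrated attention -> low entropy -> likely visually grounded;
    near-uniform attention -> high entropy -> likely a language-prior guess.
    """
    p = attn.flatten()
    p = p / p.sum()                        # normalize to a distribution
    return float(-(p * np.log(p + eps)).sum())

grid = (8, 8)                              # assumed patch grid
uniform = np.full(grid, 1.0 / 64)          # attention spread everywhere
peaked = np.zeros(grid)
peaked[3, 4] = 1.0                         # attention on a single patch

print(spatial_entropy(uniform))  # ~log(64) ≈ 4.16: high entropy, hallucination risk
print(spatial_entropy(peaked))   # ~0: low entropy, strong visual grounding
```

The localization-consensus idea in point 2 could then be approximated by averaging this entropy over attention heads (or checking that their argmax patches agree) before deciding whether to penalize a candidate token.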

Section 04

Technical Details: Key Implementation Points of VISAGE

VISAGE implementation involves:

  1. Attention Distribution Extraction: Extract weights from cross-attention layers between the visual encoder and language decoder;
  2. Entropy Calculation and Normalization: Use an entropy definition suitable for image grids, normalize attention across different layers to ensure comparability;
  3. Dynamic Threshold Adjustment: Adjust thresholds based on tasks and model behavior to balance hallucination suppression and generation fluency.
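Steps 2 and 3 above can be sketched together. This is a hedged sketch under assumptions, not the paper's code: entropy is normalized by the log of the patch count so maps from layers with different grid sizes stay comparable, and a tunable threshold flags diffuse tokens; the function names and the 0.8 threshold are illustrative.

```python
import numpy as np

def normalized_entropy(attn: np.ndarray, eps: float = 1e-12) -> float:
    """Grid entropy scaled to [0, 1] regardless of patch-grid size."""
    p = attn.flatten() / attn.sum()
    h = -(p * np.log(p + eps)).sum()
    return float(h / np.log(p.size))       # divide by max possible entropy

def flag_ungrounded(head_maps: list, threshold: float = 0.8) -> bool:
    """True if a token looks visually ungrounded (candidate for penalty).

    head_maps -- per-head cross-attention maps for this token (step 1);
    the mean over heads is a simple stand-in for head consensus.
    """
    mean_h = np.mean([normalized_entropy(m) for m in head_maps])
    return bool(mean_h > threshold)

peaked = np.zeros((8, 8))
peaked[2, 5] = 1.0                         # all heads agree on one patch
uniform = np.ones((8, 8)) / 64             # attention is spread out

print(flag_ungrounded([peaked] * 4))   # False: concentrated, keep the token
print(flag_ungrounded([uniform] * 4))  # True: diffuse, penalize the token
```

Lowering the threshold suppresses hallucinations more aggressively at some cost to fluency, which is the trade-off the dynamic adjustment in step 3 is meant to balance.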

Section 05

Evidence: Experimental Evaluation Results

VISAGE performs strongly across multiple benchmarks:

  • HallusionBench: A benchmark specifically for hallucination evaluation, achieving a 7.75% relative improvement;
  • MMMU: A multidisciplinary multimodal understanding benchmark, achieving an 8.59% relative performance improvement on the validation set;
  • Comparative Advantages: Training-free, low computational overhead, high generality (applicable to any Transformer-based multimodal model).

Section 06

Conclusion: Value and Contributions of VISAGE

By addressing the objective mismatch problem in multimodal hallucinations, VISAGE proposes an elegant solution: using the spatial characteristics of cross-attention to effectively suppress hallucinations without modifying model parameters, providing a tool for the reliable deployment of multimodal large models. This work not only offers practical technology but also deepens the understanding of multimodal generation mechanisms.


Section 07

Limitations and Future Directions

Limitations:

  1. Relies on the quality of attention weights; if the model's attention itself is problematic, the effect is limited;
  2. Analyzing attention during inference introduces certain computational overhead (especially for high-resolution images);
  3. Mainly targeted at Transformer diffusion models; applicability to other architectures needs verification.

Future Directions: Develop more efficient attention analysis methods, extend to modalities like video, and combine training methods to improve visual grounding quality.