Zing Forum


LaMI: Enhancing Visual Reasoning Capabilities of Large Language Models via Late Multi-Image Fusion

LaMI proposes a late multi-image fusion method that enables text-only trained large language models to gain strong visual reasoning capabilities without expensive multimodal training. It outperforms traditional test-time enhancement methods on visual commonsense tasks while maintaining, and in some cases improving, performance on text tasks.

Tags: LaMI, multimodal fusion, large language models, visual reasoning, late fusion, multi-image, ACL 2026, PyTorch, LLaMA 3, vision-language models
Published 2026-04-08 23:11 · Recent activity 2026-04-08 23:19 · Estimated read 5 min

Section 01

LaMI: Enhancing LLM Visual Reasoning via Late Multi-Image Fusion (OP)

LaMI proposes a late multi-image fusion method that allows text-only trained large language models to acquire strong visual reasoning capabilities without expensive multimodal training. This method outperforms traditional enhancement approaches on visual commonsense tasks while maintaining or even improving performance on text tasks, providing a new path for the development of multimodal AI.


Section 02

Background: Visual Dilemma of LLMs and Limitations of Existing Solutions

Large Language Models (LLMs) excel at text reasoning but lack visual grounding. While Vision-Language Models (VLMs) solve some of these issues, they underperform LLMs on pure text tasks, and adapting to new LLMs requires expensive multimodal training. Existing test-time enhancement methods mostly use early fusion and only single images, which have limitations such as incomplete information coverage and interference with text reasoning.


Section 03

Core Innovations of LaMI: Late Multi-Image Fusion and Technical Architecture

LaMI has two core innovations:

1. Multi-image parallel sampling: generate multiple related images from different angles and scenarios based on the text prompt, providing comprehensive visual context.
2. Late fusion layer: fuse visual information just before the model's final layer, combining the prediction probabilities derived from the multiple images with those of the text-only LLM.

The implementation is based on PyTorch and supports models such as GPT-2, Gemma 2B, and LLaMA 3. Training consists of a multimodal pre-training phase (Wikipedia-103 + LAION-220) followed by task-specific evaluation. At inference time, multiple images (typically k=10) are generated, their features are extracted with a visual encoder, and the resulting predictions are fused with those of the text-only model.
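The late-fusion step above can be sketched in a few lines of PyTorch. This is a minimal illustration, assuming "late fusion" means interpolating the text model's next-token distribution with the distribution averaged over the k sampled images; the function name, the `alpha` weight, and the simple averaging scheme are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def late_multi_image_fusion(text_logits, image_logits, alpha=0.5):
    """Illustrative sketch of LaMI-style late fusion (names and weighting
    are assumptions, not the paper's exact method).

    text_logits  : (vocab_size,) next-token logits from the text-only LLM
    image_logits : list of k tensors of shape (vocab_size,), one per sampled
                   image, produced by conditioning the final layer on that
                   image's visual features
    alpha        : interpolation weight between text and visual predictions
    """
    text_probs = F.softmax(text_logits, dim=-1)
    # Average the predictive distributions over the k sampled images
    # (the article cites k=10 as the usual setting).
    image_probs = torch.stack(
        [F.softmax(logits, dim=-1) for logits in image_logits]
    ).mean(dim=0)
    # Late fusion: mix the two distributions just before prediction.
    return (1 - alpha) * text_probs + alpha * image_probs
```

Because the mix happens on output distributions rather than on input embeddings (early fusion), the text-only prediction path stays intact, which matches the article's claim that text-task performance is preserved.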


Section 04

Experimental Results: Dual Improvement in Visual and Text Tasks

Experimental results show that LaMI significantly outperforms other test-time enhancement methods on visual commonsense reasoning tasks, approaching the performance of specially trained VLMs. Applied to LLaMA 3, it improves not only visual tasks but also, unexpectedly, pure-text NLP performance. The computational overhead at test time is moderate, and the added latency is acceptable.


Section 05

Application Prospects: New Path and Insights for Multimodal AI

LaMI opens a new path for multimodal AI: it lets existing strong text LLMs gain visual capabilities without retraining, enabling rapid iteration and reducing deployment costs. Its late-fusion concept may also inspire fusion research for other modalities such as audio and video. We look forward to follow-up work and practical applications at ACL 2026.


Section 06

Conclusion and Resources: Research Value of LaMI and Support for Reproduction

LaMI represents an important direction in multimodal large-model research: balancing text capability with multimodal integration, where the timing and method of fusion are key. The official repository provides a complete PyTorch implementation, pre-trained model downloads, and evaluation scripts, lowering the barrier to reproduction and further research.