# MMIR: A New Benchmark for Consistency Reasoning Capabilities of Multimodal Large Models

> The UCSC research team released the MMIR benchmark, which specifically evaluates the ability of multimodal large language models to detect image-text inconsistencies. It covers five reasoning-intensive inconsistency types and reveals significant shortcomings of current models in complex multimodal reasoning.

- Board: [Openclaw Llm](https://www.zingnex.cn/en/forum/board/openclaw-llm)
- Published: 2026-05-12T08:00:57.000Z
- Last activity: 2026-05-12T08:51:20.602Z
- Popularity: 150.2
- Keywords: multimodal large language models, MMIR benchmark, inconsistency reasoning, image-text understanding, ACL 2025, model evaluation, visual question answering, fact-checking
- Page link: https://www.zingnex.cn/en/forum/thread/mmir
- Canonical: https://www.zingnex.cn/forum/thread/mmir

---

## [Introduction] MMIR Benchmark: A New Evaluation Tool for Inconsistency Reasoning Capabilities of Multimodal Large Models

The UCSC research team released the MMIR (Multimodal Inconsistency Reasoning) benchmark, the first systematic framework dedicated to evaluating the reasoning ability of multimodal large language models (MLLMs) to detect image-text inconsistencies. This benchmark covers five reasoning-intensive inconsistency types, reveals significant shortcomings of current mainstream models in complex multimodal reasoning, and marks an important shift in multimodal model evaluation from 'being able to understand' to 'being able to judge'.

## Research Background: Key Gap in Inconsistency Reasoning Evaluation of Multimodal Models

With the rapid development of MLLMs in tasks like image-text understanding and visual question answering, a core question emerges: Do models have deep reasoning capabilities to identify subtle inconsistencies between images and text? Current mainstream evaluation benchmarks mostly focus on correct image-text understanding and generation, but rarely touch on inconsistency detection and reasoning—which is crucial in real-world scenarios such as news verification, social media moderation, and legal document review.

## MMIR Benchmark Design and Data Filtering Process

The MMIR benchmark contains 534 test samples covering five inconsistency types: factual contradiction, identity misattribution, context mismatch, quantity difference, and spatiotemporal incoherence. To ensure sample quality, the team adopted a four-stage filtering process:

1. Initial collection of multi-source candidate image-text pairs;
2. Manual annotation and classification of inconsistency types;
3. Multiple rounds of cross-validation and expert review to eliminate low-quality samples;
4. Grading based on reasoning complexity and depth of required background knowledge.
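As a minimal Python sketch of how MMIR-style samples and the filtering outcome above might be represented, consider the following; the field names, enum values, and difficulty scale here are illustrative assumptions, not the released dataset's actual schema.

```python
from collections import Counter
from dataclasses import dataclass
from enum import Enum


class InconsistencyType(Enum):
    """The five inconsistency categories named in the benchmark."""
    FACTUAL_CONTRADICTION = "factual contradiction"
    IDENTITY_MISATTRIBUTION = "identity misattribution"
    CONTEXT_MISMATCH = "context mismatch"
    QUANTITY_DIFFERENCE = "quantity difference"
    SPATIOTEMPORAL_INCOHERENCE = "spatiotemporal incoherence"


@dataclass
class MMIRSample:
    """Hypothetical record layout for one image-text test case."""
    image_path: str                 # rendered page / poster image
    text: str                       # accompanying textual content
    label: InconsistencyType        # annotated inconsistency type
    difficulty: int                 # grade assigned in the final filtering stage
    expert_verified: bool = False   # set after cross-validation and expert review


def keep_sample(sample: MMIRSample) -> bool:
    """Stage-3 filter in this sketch: keep only expert-verified samples."""
    return sample.expert_verified


if __name__ == "__main__":
    candidates = [
        MMIRSample("poster_001.png", "Caption dates the event to 2023; the poster says 2021.",
                   InconsistencyType.FACTUAL_CONTRADICTION, difficulty=2, expert_verified=True),
        MMIRSample("web_014.png", "Chart shows four bars, but the text claims five product lines.",
                   InconsistencyType.QUANTITY_DIFFERENCE, difficulty=1, expert_verified=False),
    ]
    kept = [s for s in candidates if keep_sample(s)]
    print(Counter(s.label.name for s in kept))  # per-type counts after filtering
```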

## Performance of Mainstream Models on MMIR

Evaluation results show significant limitations in current models. On open-ended questions, the best-performing model, o1, achieves an overall accuracy of only 51.40% (dropping to 38.73% on the poster category), while GPT-4o reaches 33.14%; open-source models fare worse still, with Qwen2.5-VL-7B at 17.60%, LLaVA-NeXT-7B at 14.70%, and InternVL2.5-8B at 14.23%. Performance improves slightly in the multiple-choice setting (o1 reaches 52.15%, GPT-4o 47.75%) but still falls far short of what practical applications require.
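For readers reproducing such numbers, the sketch below shows one generic way to tally overall and per-category accuracy from graded model answers; it is not the paper's evaluation harness, and the category names are placeholders.

```python
from collections import defaultdict


def accuracy_by_category(records):
    """Compute overall and per-category accuracy from (category, is_correct) pairs."""
    hits, totals = defaultdict(int), defaultdict(int)
    for category, is_correct in records:
        totals[category] += 1
        hits[category] += int(is_correct)
    per_category = {c: hits[c] / totals[c] for c in totals}
    overall = sum(hits.values()) / sum(totals.values())
    return overall, per_category


if __name__ == "__main__":
    # Toy grading results: (artifact category, whether the model's answer was judged correct)
    demo = [("poster", True), ("poster", False), ("webpage", True), ("webpage", True)]
    overall, per_category = accuracy_by_category(demo)
    print(f"overall={overall:.2%}", per_category)
```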

## Technical Challenges and Future Research Directions

MMIR reveals four key challenges:

1. Insufficient fine-grained visual understanding: models struggle to capture image details;
2. Cross-modal alignment bias: matching between visual features and linguistic semantics is imprecise;
3. Broken reasoning chains: consistency is poorly maintained across long logical chains;
4. Domain knowledge dependency: missing background knowledge in specific domains undermines judgment.

Sustained progress along these directions will be needed.

## Industry Application Insights: Capability Boundaries and Improvement Paths

The industry needs to recognize the current capability boundaries of these models and retain human review in critical scenarios such as content moderation and fact-checking. Meanwhile, targeted fine-tuning on the MMIR dataset can improve a model's inconsistency detection; the dataset has been open-sourced for the community to explore.
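For teams experimenting with such fine-tuning, a minimal sketch of converting an MMIR-style image-text pair into a generic vision-chat training record follows; the prompt wording, JSON layout, and file names are assumptions and should be adapted to the released dataset's actual format and whichever training framework is used.

```python
import json


def to_chat_record(image_path: str, text: str, inconsistency_label: str) -> dict:
    """Build one instruction-tuning record pairing an image with an inconsistency question."""
    return {
        "images": [image_path],
        "messages": [
            {"role": "user",
             "content": "Does the text contradict the image? "
                        "If so, name the inconsistency type.\n\n" + text},
            {"role": "assistant",
             "content": f"Yes, this is a case of {inconsistency_label}."},
        ],
    }


if __name__ == "__main__":
    record = to_chat_record("poster_001.png",
                            "The caption says the concert takes place on May 3rd.",
                            "factual contradiction")
    print(json.dumps(record, ensure_ascii=False, indent=2))
```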

## Summary and Outlook: A New Milestone in Multimodal Evaluation

The MMIR benchmark provides a standardized evaluation tool for the inconsistency reasoning capabilities of MLLMs. Current model performance indicates this field remains challenging; future breakthroughs are needed in fine-grained visual understanding, precise cross-modal alignment, and long-chain logical reasoning to achieve reliable multimodal intelligent systems.
