AEGIS Benchmark: A New Evaluation Framework for Forensic Analysis of AI-Generated Academic Images

The AEGIS Benchmark systematically evaluates the academic image forensics capabilities of 25 multimodal large language models (MLLMs) and 9 expert models through three key innovations: domain-specific complexity, diverse forgery simulation, and multi-dimensional forensic assessment. It reveals that current forensic technologies are significantly lagging behind the development of generative AI.

Tags: AI-Generated Images · Academic Integrity · Image Forensics · Multimodal Large Language Models · Benchmarking · Generative AI Safety
Published 2026-05-01 01:56 · Recent activity 2026-05-01 11:22 · Estimated read: 5 min

Section 01

Introduction

The rapid development of generative AI has triggered a crisis of academic image integrity. In response, researchers have introduced the AEGIS Benchmark, which systematically evaluates the academic image forensics capabilities of 25 multimodal large language models (MLLMs) and 9 expert models through three key innovations: domain-specific complexity, diverse forgery simulation, and multi-dimensional forensic assessment. The results show that current forensic technology lags significantly behind generative AI.


Section 02

Severe Reality of Academic Image Fraud and Limitations of Existing Benchmarks

Academic image fraud has grown sharply with the rise of generative AI, and traditional detection tools struggle to identify highly realistic AI-generated images. Existing forensic benchmarks are limited: they cover only a single image type and fail to simulate the diverse forgery strategies seen in real academic scenarios, so models that score well on them perform poorly in practice.


Section 03

Three Core Innovations of AEGIS

Domain-Specific Complexity

Covers 7 major academic fields and 39 subcategories, reflecting the image characteristics and fraud patterns of different disciplines. Even the top-performing model, GPT-5.1, reached an overall score of only 48.80%, while expert models achieved a localization accuracy of just 30.09%.

Diverse Forgery Simulation

Simulates 4 mainstream forgery strategies, such as directly generating fake images and locally tampering with real ones. For 11 of the 25 generative models tested, average forensic accuracy on the resulting forgeries fell below 50%, exposing a 'forensic gap'.

Multi-Dimensional Forensic Assessment

Introduces a multi-dimensional framework covering detection capability, reasoning process, and localization precision. MLLMs reached 84.74% accuracy in identifying text artifacts, while expert detectors peaked at 79.54% binary classification accuracy.
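As a rough sketch of what two of these dimensions might look like in code (the function names, box format, and thresholds are illustrative assumptions, not AEGIS's actual evaluation API), detection can be scored as binary accuracy and localization as region overlap against ground-truth tampering boxes:

```python
def detection_accuracy(preds, labels):
    """Fraction of images whose real/forged verdict matches ground truth."""
    assert len(preds) == len(labels)
    return sum(p == t for p, t in zip(preds, labels)) / len(labels)

def box_iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) tampering boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def localization_accuracy(pred_boxes, true_boxes, thresh=0.5):
    """Share of predicted tampering regions overlapping ground truth above a threshold."""
    hits = sum(box_iou(p, t) >= thresh for p, t in zip(pred_boxes, true_boxes))
    return hits / len(true_boxes)
```

A model can score well on one axis and poorly on another, which is exactly why the benchmark reports the dimensions separately rather than as a single number.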


Section 04

Key Findings from AEGIS Evaluation

  1. MLLMs, leveraging their natural-language understanding, excel at identifying text artifacts such as annotations embedded in academic images;
  2. Expert forensic models retain an edge in purely visual analysis (statistical anomalies, pixel artifacts) but adapt poorly to new generative architectures;
  3. No single model covers all evaluation dimensions, so multiple detection methods need to be combined.
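The third finding suggests fusing complementary detectors. A minimal sketch of such an ensemble, assuming each detector emits a forgery probability in [0, 1] (the weights and threshold here are placeholders, not values from the benchmark):

```python
def ensemble_forgery_score(mllm_score, expert_score, w_mllm=0.5):
    """Weighted fusion of an MLLM's semantic/text-artifact score and an
    expert model's pixel-level score, both forgery probabilities in [0, 1]."""
    return w_mllm * mllm_score + (1 - w_mllm) * expert_score

def is_forged(mllm_score, expert_score, threshold=0.5):
    """Flag an image for manual review when the fused score crosses the threshold."""
    return ensemble_forgery_score(mllm_score, expert_score) >= threshold
```

In practice the weights would be calibrated on a held-out set, and a flagged image should trigger human review rather than an automatic verdict.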

Section 05

Implications and Recommendations for Academic Integrity

  • Academic journals and conferences: do not rely entirely on automated tools; strengthen manual verification of anomalous results;
  • Researchers: stay vigilant, verify images of unknown origin, and promote the sharing of raw data and code;
  • Forensic tool developers: integrate multimodal capabilities to improve adaptability to new generative models and localization precision.

Section 06

Significance and Future Outlook of AEGIS

AEGIS marks a new stage in academic image forensics research. It is not only an evaluation tool but also a measure of the gap between current AI-safety technology and its ideal goals. The benchmark will continue to be updated, providing the academic community with a front-line defense against AI-enabled fraud.