
SequenBench: A New Benchmark for Evaluating Visual Sorting Capabilities of Multimodal Large Language Models

SequenBench is an evaluation benchmark containing 6761 images and 7261 multiple-choice questions, specifically designed to test the visual sorting capabilities of multimodal large language models.

Tags: Multimodal Large Language Model · Visual Sorting Benchmark · MLLM Evaluation Dataset · Apache-2.0
Published 2026-04-10 14:54 · Recent activity 2026-04-10 15:15 · Estimated read: 5 min

Section 01

Introduction

SequenBench is an evaluation benchmark specifically designed to test the visual sorting capabilities of multimodal large language models (MLLMs), containing 6761 images and 7261 multiple-choice questions. This benchmark aims to fill the gap in evaluating MLLMs' visual sorting capabilities, is open-sourced under the Apache-2.0 license, and provides researchers with a unified evaluation standard and tools.


Section 02

Background and Motivation

With the significant progress of MLLMs in tasks such as image understanding and visual question answering, researchers have begun to focus on the fine-grained capability of visual sorting: identifying the objects in an image, understanding their relative relationships, and ordering them by a specified attribute. SequenBench was created precisely to fill this evaluation gap.


Section 03

Dataset Overview

SequenBench contains 6761 images and 7261 multiple-choice questions, making it one of the largest benchmarks for evaluating visual sorting capabilities to date. The dataset is divided into training and test sets in a 7:3 ratio and stored in JSONL format. Each sample includes image (file name), question (sorting-related question), options (four sorting options), and answer (correct answer), covering sorting tasks for various physical quantities such as temperature, length, and thickness.
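The JSONL layout described above can be sketched as follows. This is a minimal, hypothetical loader; the exact file names, question wording, and option format are assumptions based on the schema described (image, question, four options, answer), not taken from the actual dataset files.

```python
import json

# Hypothetical sample record following the described schema;
# the field values are invented for illustration.
sample_line = json.dumps({
    "image": "img_00001.jpg",
    "question": "Sort the rods in the image from shortest to longest.",
    "options": ["A<B<C", "B<A<C", "C<A<B", "A<C<B"],
    "answer": "A<B<C",
})

def load_jsonl(lines):
    """Parse JSONL records and run a minimal schema check."""
    records = []
    for line in lines:
        rec = json.loads(line)
        # Every sample carries an image reference, a question,
        # exactly four sorting options, and the correct answer.
        assert {"image", "question", "options", "answer"} <= rec.keys()
        assert len(rec["options"]) == 4
        records.append(rec)
    return records

records = load_jsonl([sample_line])
```

In a real pipeline the `image` field would be resolved against the Images directory before being handed to the model.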


Section 04

Evaluation Dimensions and Task Types

The evaluation tasks of SequenBench have three key features:

1. Coverage of multiple physical quantities (temperature, length, thickness, etc.)
2. Visual reasoning: joint reasoning over both the image content and the question text
3. Precise sorting: understanding the relative size relationships among objects
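A question of this shape would be presented to a model as a single multimodal prompt. Below is a minimal sketch of such a prompt builder; the wording and lettering scheme are assumptions for illustration, not the benchmark's actual template, and the image itself would be passed separately through the model's vision input.

```python
def build_prompt(question, options):
    """Assemble a four-option multiple-choice sorting prompt.

    The accompanying image is supplied to the model via its
    vision channel; this function only formats the text part.
    """
    letters = ["A", "B", "C", "D"]
    lines = [question]
    lines += [f"{letter}. {opt}" for letter, opt in zip(letters, options)]
    lines.append("Answer with the letter of the correct ordering.")
    return "\n".join(lines)

prompt = build_prompt(
    "Sort the three rods from shortest to longest.",
    ["red < blue < green", "blue < red < green",
     "green < red < blue", "red < green < blue"],
)
```

Restricting the answer to a single letter keeps scoring unambiguous, which matters for the accuracy-based metrics discussed below.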


Section 05

Experiments and Evaluation Methods

The project provides complete experimental code support: For open-source models, it supports inference for 10 mainstream models such as DeepSeek-VL, InternVL3.5, and InstructBLIP; for closed-source models, it supports zero-shot/few-shot inference for Gemini-3 Pro and GPT-5. Evaluation metrics include overall accuracy, accuracy for each physical quantity category, precision (P), recall (R), and F1 score.
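The metrics listed above can be computed directly from predicted and gold option letters. The sketch below assumes per-question category labels and macro-averaged P/R/F1 over the option letters; the benchmark's own evaluation scripts may aggregate differently.

```python
from collections import defaultdict

def evaluate(preds, golds, categories):
    """Compute overall accuracy, per-category accuracy, and
    macro-averaged precision/recall/F1 over the option letters."""
    assert len(preds) == len(golds) == len(categories)
    overall = sum(p == g for p, g in zip(preds, golds)) / len(golds)

    # Per-physical-quantity accuracy (e.g. temperature, length, thickness).
    per_cat = defaultdict(lambda: [0, 0])  # category -> [correct, total]
    for p, g, c in zip(preds, golds, categories):
        per_cat[c][0] += p == g
        per_cat[c][1] += 1
    cat_acc = {c: correct / total for c, (correct, total) in per_cat.items()}

    # One-vs-rest P/R/F1 per option letter, then macro-average.
    precisions, recalls, f1s = [], [], []
    for lab in sorted(set(golds) | set(preds)):
        tp = sum(p == lab and g == lab for p, g in zip(preds, golds))
        fp = sum(p == lab and g != lab for p, g in zip(preds, golds))
        fn = sum(p != lab and g == lab for p, g in zip(preds, golds))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    macro = {"P": sum(precisions) / len(precisions),
             "R": sum(recalls) / len(recalls),
             "F1": sum(f1s) / len(f1s)}
    return overall, cat_acc, macro

overall, cat_acc, macro = evaluate(
    preds=["A", "B", "B", "C"],
    golds=["A", "B", "C", "C"],
    categories=["length", "length", "temperature", "temperature"],
)
```

Macro-averaging weights each option letter equally, which guards against a model that inflates accuracy by always picking the most frequent option.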


Section 06

Practical Significance and Application Value

The release of SequenBench is significant in several ways:

1. It is the first systematic evaluation of MLLMs' visual sorting capabilities
2. It gives researchers clear optimization directions
3. It promotes multimodal development, since visual sorting underpins more complex visual reasoning
4. It provides a unified evaluation standard and benchmark scores


Section 07

Technical Implementation and Conclusion

The project is open-sourced under the Apache-2.0 license with a clear code structure:

- Dataset: the dataset and its train/test split
- Images: evaluation images
- Code/inference: open-source model inference
- Code/close_inference: closed-source model inference
- Code/finetune: fine-tuning code
- Code/evaluation: evaluation scripts

SequenBench provides a comprehensive evaluation platform for MLLMs' visual sorting capabilities and is positioned to become important infrastructure for driving model progress and building more powerful multimodal AI systems.