OST: A New Framework for Data Selection in Multimodal Models Based on Incremental Optimization Utility

This article introduces the One-Step-Train (OST) framework, which redefines data selection as an incremental optimization utility ranking problem. By simulating a single-step update on a lightweight proxy model to estimate the marginal utility of each sample, OST reduces training costs by 43% while outperforming the LLM-as-a-Judge baseline by 1.8 points.

Tags: Data Selection · Multimodal Models · Incremental Optimization · Synthetic Data · LLM-as-a-Judge · Training Efficiency · Marginal Utility
Published 2026-05-08 17:28 · Last activity 2026-05-11 11:26 · Estimated read: 6 min

Section 01

Introduction: The OST Framework, a New Optimization-Driven Solution for Data Selection in Multimodal Models

This article introduces the One-Step-Train (OST) framework, which redefines data selection as an incremental optimization utility ranking problem. By simulating a single-step update on a lightweight proxy model to estimate the marginal utility of each sample, OST reduces training costs by 43% while outperforming the LLM-as-a-Judge baseline by 1.8 points, providing an efficient and interpretable new solution for multimodal model training.


Section 02

Background: Dilemmas of Synthetic Data and Limitations of Existing Methods

Dilemmas of Synthetic Data

Large multimodal models (LMMs) depend on high-quality training data, but synthetic data inevitably contains noise and low-quality samples, which waste compute and can even degrade performance. Traditional heuristic rules and manual screening are costly and struggle to capture a sample's deeper training value, while the LLM-as-a-Judge approach incurs very high computational cost and lacks interpretability.


Section 03

Methodology: Core Ideas and Technical Implementation of the OST Framework

Core Idea: Data Value from an Optimization Perspective

OST treats data selection as an incremental optimization utility ranking problem, directly estimating the actual contribution of each sample to model training. Its advantages include direct optimization of objectives, high computational efficiency, and strong interpretability.

Technical Implementation: Single-Step Simulation and Utility Estimation

  1. Proxy Model Selection: Use a lightweight model to reflect the basic impact of data on training;
  2. Single-Step Update Simulation: Perform a single-step gradient update for each sample and measure performance changes to obtain marginal utility;
  3. Utility Ranking and Selection: Rank samples by marginal utility and select the top subset for training.
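The three steps above can be sketched in a few lines. This is a minimal illustration using a tiny linear proxy with squared loss; the function name, the linear model, and the loss interface are assumptions made for illustration, not the paper's actual proxy or API:

```python
import numpy as np

def marginal_utility(w, sample_x, sample_y, val_x, val_y, lr=0.1):
    """Estimate a sample's incremental optimization utility by simulating
    a single gradient step on a tiny linear proxy (squared loss).
    Illustrative sketch only: the real OST proxy is a lightweight
    multimodal model, not a linear regressor."""
    def val_loss(weights):
        # Mean squared error of the proxy on a held-out validation set.
        return float(np.mean((val_x @ weights - val_y) ** 2))

    loss_before = val_loss(w)

    # One gradient step on this single sample (dL/dw for squared loss).
    grad = 2.0 * sample_x * (sample_x @ w - sample_y)
    w_trial = w - lr * grad  # simulated update; w itself is untouched

    loss_after = val_loss(w_trial)
    # Positive utility: the simulated step reduced validation loss.
    return loss_before - loss_after
```

Because the update is simulated on a copy of the weights, the proxy itself is never mutated, so every sample is scored against the same starting point.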

Section 04

Experimental Evidence: Performance and Efficiency Advantages of the OST Framework

Experimental Results: Pareto-Optimal Efficiency

  • Cost and Performance Optimization: Selecting the top 50% of data reduces training costs by 43% and time by 17%, while outperforming the LLM-as-a-Judge baseline by 1.8 points;
  • Fixed Budget Performance: With the top 20% of data, it outperforms the LLM-as-a-Judge baseline by 5.6 points, as well as the DEITA and Full-SFT baselines;
  • Noise Identification: Effectively eliminates negative transfer from toxic samples, especially suitable for complex reasoning tasks.
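The subset selection behind these results (e.g. the top-50% and top-20% settings) amounts to a simple ranking step once utilities are estimated. A hypothetical helper, not code from the paper:

```python
def select_top_fraction(samples, utilities, fraction=0.5):
    """Keep the top `fraction` of samples ranked by estimated marginal
    utility, preserving the original dataset order among kept items.
    Hypothetical helper for illustration; not the paper's implementation."""
    # Indices sorted by utility, highest first.
    ranked = sorted(range(len(samples)), key=lambda i: utilities[i], reverse=True)
    k = max(1, int(len(samples) * fraction))
    keep = sorted(ranked[:k])  # restore original order for training
    return [samples[i] for i in keep]
```

With `fraction=0.2` this mirrors the fixed-budget setting, where OST's ranking is reported to beat LLM-as-a-Judge by 5.6 points.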

Section 05

Conclusion: Principles of Data Value Evaluation from an Optimization Perspective

Deep Insight: Why the Optimization Perspective Is More Effective

Data value should be judged by a sample's contribution to the optimization process, not by its surface semantics. High-value samples produce gradients aligned with the optimal update direction, while low-value samples produce gradients that conflict with it. By estimating gradient contributions through single-step simulation, OST scores samples more accurately than LLM-as-a-Judge.
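The alignment intuition can be made concrete as a cosine similarity between a per-sample gradient and a reference gradient direction. A toy sketch under that framing; OST itself scores samples via a simulated single-step update, not this exact formula:

```python
import numpy as np

def gradient_alignment(sample_grad, reference_grad):
    """Cosine similarity between a per-sample gradient and a reference
    gradient direction (e.g. averaged over a held-out set).
    > 0: the sample pushes the model the way the objective wants;
    < 0: the sample's update direction conflicts with it.
    Illustrative only, not the paper's scoring rule."""
    num = float(np.dot(sample_grad, reference_grad))
    den = float(np.linalg.norm(sample_grad) * np.linalg.norm(reference_grad))
    return num / den if den > 0.0 else 0.0
```

Under this reading, the "toxic" samples that OST filters out are precisely those whose alignment score is negative.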

Summary

OST redefines data selection as an incremental optimization utility problem, providing an efficient and interpretable solution and offering a new perspective for data engineering.


Section 06

Applications and Outlook: Practical Value and Future Directions of the OST Framework

Practical Applications and Promotion Value

OST is applicable to scenarios such as synthetic data filtering, curriculum learning, active learning, and multi-task learning, with interpretability guiding data collection strategies.

Outlook

As model scales grow, the importance of data selection becomes prominent. The optimization-driven approach represented by OST is expected to become a standard paradigm in data engineering, helping to maximize training benefits under limited budgets.