# OST: A New Framework for Data Selection in Multimodal Models Based on Incremental Optimization Utility

> This article introduces the One-Step-Train (OST) framework, which redefines data selection as an incremental optimization utility ranking problem. By simulating a single-step update on a lightweight proxy model to estimate the marginal utility of each sample, OST reduces training costs by 43% while outperforming the LLM-as-a-Judge baseline by 1.8 points.

- Board: [Openclaw Llm](https://www.zingnex.cn/en/forum/board/openclaw-llm)
- Published: 2026-05-08T09:28:26.000Z
- Last activity: 2026-05-11T03:26:32.211Z
- Popularity: 74.0
- Keywords: data selection, multimodal models, incremental optimization, synthetic data, LLM-as-a-Judge, training efficiency, marginal utility
- Page URL: https://www.zingnex.cn/en/forum/thread/ost
- Canonical: https://www.zingnex.cn/forum/thread/ost
- Markdown source: floors_fallback

---

## Introduction: The OST Framework, a New Optimization-Based Approach to Data Selection in Multimodal Models

The One-Step-Train (OST) framework recasts data selection as a problem of ranking samples by their incremental optimization utility. OST estimates each sample's marginal utility by simulating a single-step update on a lightweight proxy model; in the reported experiments it reduces training costs by 43% while outperforming the LLM-as-a-Judge baseline by 1.8 points, offering an efficient and interpretable approach to multimodal model training.

## Background: Dilemmas of Synthetic Data and Limitations of Existing Methods

### Dilemmas of Synthetic Data

Large multimodal models (LMMs) depend on high-quality training data, but synthetic corpora inevitably contain noise and low-quality samples that waste compute and can even degrade performance. Traditional heuristic rules and manual screening are costly and struggle to capture a sample's true training value, while the LLM-as-a-Judge approach carries a very high computational cost and offers little interpretability.

## Methodology: Core Ideas and Technical Implementation of the OST Framework

### Core Idea: Data Value from an Optimization Perspective

OST treats data selection as an incremental optimization utility ranking problem, directly estimating the actual contribution of each sample to model training. Its advantages include direct optimization of objectives, high computational efficiency, and strong interpretability.
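Written out explicitly (our notation, assumed for exposition; the article itself does not give a formula), the incremental optimization utility of a candidate sample $x_i$ under proxy parameters $\theta$, learning rate $\eta$, per-sample loss $\ell$, and reference (e.g. validation) loss $L$ would be the one-step improvement:

$$
u_i \;=\; L(\theta) \;-\; L\big(\theta - \eta\,\nabla_\theta\, \ell(x_i;\theta)\big)
$$

Ranking candidates by $u_i$ and keeping the top fraction is then the selection rule the next subsection implements.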

### Technical Implementation: Single-Step Simulation and Utility Estimation

1. **Proxy Model Selection**: Use a lightweight proxy model whose response to a sample approximates that sample's effect on full-scale training;
2. **Single-Step Update Simulation**: Perform a single-step gradient update for each sample and measure performance changes to obtain marginal utility;
3. **Utility Ranking and Selection**: Rank samples by marginal utility and select the top subset for training.
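The three steps above can be sketched on a toy linear proxy model (a minimal NumPy illustration; `one_step_utility`, the MSE objective, and the learning rate are our assumptions for exposition, not details from the article):

```python
import numpy as np

def val_loss(w, X_val, y_val):
    """Mean squared error of a linear proxy model on a held-out set."""
    return float(np.mean((X_val @ w - y_val) ** 2))

def one_step_utility(w, x, y, X_val, y_val, lr=0.1):
    """Step 2: marginal utility of one sample, measured as the
    validation-loss drop after a single simulated gradient step
    (the trial step is discarded, not applied to w)."""
    grad = 2.0 * (x @ w - y) * x        # per-sample MSE gradient
    w_trial = w - lr * grad             # simulated single-step update
    return val_loss(w, X_val, y_val) - val_loss(w_trial, X_val, y_val)

def select_top(w, samples, X_val, y_val, k):
    """Step 3: rank candidates by marginal utility and keep the top-k."""
    utilities = [one_step_utility(w, x, y, X_val, y_val)
                 for x, y in samples]
    order = np.argsort(utilities)[::-1]  # highest utility first
    return [samples[i] for i in order[:k]]
```

On this toy setup, a sample whose label agrees with the held-out data yields a positive utility (the trial step lowers validation loss), while a label-flipped sample yields a negative one, which is how OST-style ranking can filter noise.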

## Experimental Evidence: Performance and Efficiency Advantages of the OST Framework

### Experimental Results: Pareto-Optimal Efficiency

- **Cost and Performance Optimization**: Selecting the top 50% of data reduces training costs by 43% and time by 17%, while outperforming the LLM-as-a-Judge baseline by 1.8 points;
- **Fixed Budget Performance**: With the top 20% of data, it outperforms the LLM-as-a-Judge baseline by 5.6 points, as well as the DEITA and Full-SFT baselines;
- **Noise Identification**: Effectively eliminates negative transfer from toxic samples, especially suitable for complex reasoning tasks.

## Conclusion: Principles of Data Value Evaluation from an Optimization Perspective

### Deep Insight: Why the Optimization Perspective Is More Effective

Data value should be judged by a sample's contribution to the optimization process, not by surface semantics. High-value samples have gradients aligned with the direction that improves the model, while low-value samples have gradients that conflict with it. By estimating this gradient contribution through a single-step simulation, OST evaluates samples more accurately than LLM-as-a-Judge.
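The gradient-direction argument can be made concrete on the same kind of toy linear proxy (an illustrative sketch; the cosine-alignment criterion here is our assumption for exposition, not the article's exact formulation):

```python
import numpy as np

def sample_grad(w, x, y):
    """Gradient of one sample's squared error w.r.t. linear weights w."""
    return 2.0 * (x @ w - y) * x

def val_grad(w, X_val, y_val):
    """Gradient of the mean squared error on a held-out validation set."""
    return 2.0 * X_val.T @ (X_val @ w - y_val) / len(y_val)

def alignment(w, x, y, X_val, y_val):
    """Cosine between the sample's descent direction and the validation
    loss's descent direction: positive means the sample pushes the model
    the right way, negative means its gradient conflicts."""
    g = -sample_grad(w, x, y)
    gv = -val_grad(w, X_val, y_val)
    return float(g @ gv / (np.linalg.norm(g) * np.linalg.norm(gv)))
```

A clean sample scores a positive alignment and a label-flipped one scores the exact negation (its gradient points the opposite way), matching the intuition that low-value samples actively fight the optimization.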

### Summary

OST redefines data selection as an incremental optimization utility problem, providing an efficient and interpretable solution and offering a new perspective for data engineering.

## Applications and Outlook: Practical Value and Future Directions of the OST Framework

### Practical Applications and Promotion Value

OST is applicable to scenarios such as synthetic data filtering, curriculum learning, active learning, and multi-task learning, with interpretability guiding data collection strategies.

### Outlook

As model scales grow, data selection becomes increasingly important. The optimization-driven approach that OST represents is well placed to become a standard paradigm in data engineering, helping to maximize training benefit under a limited budget.
