SALMUBench: A Fine-Grained Evaluation Benchmark for Machine Unlearning in Multimodal Models

A paper accepted at CVPR 2026 that proposes the first fine-grained evaluation benchmark for multimodal machine unlearning at the level of sensitive associations, revealing a dilemma between the effectiveness of current unlearning methods and their side effects.

Machine Unlearning · Multimodal Learning · CLIP · Privacy Protection · Contrastive Learning · CVPR 2026 · AI Security · Model Evaluation
Published 2026-03-27 19:33 · Recent activity 2026-03-30 18:19 · Estimated read 5 min

Section 01

[Introduction] SALMUBench: A Fine-Grained Evaluation Benchmark for Machine Unlearning in Multimodal Models (Accepted at CVPR 2026)

This paper proposes SALMUBench, the first fine-grained evaluation benchmark for multimodal machine unlearning at the level of sensitive associations, and reveals a dilemma between the effectiveness of current unlearning methods and their side effects. The benchmark gives privacy-protection and AI-security research on multimodal models a fine-grained evaluation tool.


Section 02

Research Background and Problem Motivation

With the widespread deployment of multimodal contrastive-learning models such as CLIP, removing sensitive information has become a core problem in AI security. Existing machine-unlearning research focuses on classification tasks and generative models, leaving unlearning mechanisms for contrastive-learning encoders underexplored. Moreover, evaluation granularity is coarse: current protocols cannot diagnose unlearning at the level of individual associations (e.g., they cannot measure whether the model still associates other attributes of the target person with other people's names).


Section 03

SALMUBench Benchmark Design

SALMUBench constructs a synthetic dataset of 60,000 person-attribute associations and two base models. It uses a dual-model comparison architecture: a Clean model trained only on retained data, and a Compromised model trained additionally on the sensitive association data. It also introduces a structured retained-set evaluation protocol and measures unlearning effects at multiple levels via Holdout Identity (detecting overgeneralization to unrelated identities) and Holdout Association (measuring association accuracy).
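The core measurement behind such a protocol is retrieval-style association accuracy: given an image, does the model's nearest text embedding name the right person? Below is a minimal sketch using synthetic embeddings in place of real encoder outputs; `association_accuracy` and all data here are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def association_accuracy(img_emb, txt_emb, pairs):
    """Fraction of images whose nearest text embedding is their paired name."""
    # Cosine similarity: L2-normalize each row, then take dot products.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    sim = img @ txt.T                       # (n_images, n_names)
    pred = sim.argmax(axis=1)               # retrieved name index per image
    return float((pred == pairs).mean())

rng = np.random.default_rng(0)
n, d = 100, 64
txt = rng.normal(size=(5, d))               # 5 candidate name embeddings
pairs = rng.integers(0, 5, size=n)          # ground-truth image-to-name pairing
img = txt[pairs] + 0.1 * rng.normal(size=(n, d))  # images near their paired names

acc = association_accuracy(img, txt, pairs)
```

Running the same metric on a Clean model, a Compromised model, and an unlearned model lets one compare forget-set and retain-set accuracy under a single protocol.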


Section 04

Key Finding: The Dilemma of Unlearning Methods

Existing methods face an "effectiveness vs. side effect" dilemma:
1. Incomplete unlearning: sensitive information is only superficially removed and remains implicitly encoded, so it can still leak through indirect queries.
2. Overgeneralization: erasing associations too aggressively degrades the model's overall utility.
Current methods struggle to balance security and practicality.
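The two failure modes can be quantified side by side: residual forget-set accuracy above chance indicates incomplete unlearning, while a drop in retain-set accuracy indicates overgeneralization. The sketch below is a hypothetical summary helper with made-up numbers, not a metric defined by the paper.

```python
def unlearning_report(acc_forget, acc_retain_before, acc_retain_after, chance=0.2):
    """Summarize both failure modes of one unlearning run.

    acc_forget: association accuracy on the forget set after unlearning
    acc_retain_before / acc_retain_after: retain-set accuracy before/after
    chance: random-guess accuracy (1 / number of candidate names)
    """
    leakage = max(0.0, acc_forget - chance)              # incomplete unlearning
    utility_drop = acc_retain_before - acc_retain_after  # overgeneralization
    return {"leakage": leakage, "utility_drop": utility_drop}

# A method that pushes forget-set accuracy down to chance,
# but also damages retained associations:
report = unlearning_report(acc_forget=0.22,
                           acc_retain_before=0.95,
                           acc_retain_after=0.70)
```

An ideal method would drive both numbers toward zero at once; the dilemma is that current methods trade one off against the other.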


Section 05

Technical Details and Experimental Design

The dataset consists of 60,000 synthetic sensitive-association records, which ensures reproducibility and ethical compliance. Because contrastive-learning encoders like CLIP are cross-modal (images and text are mapped into a shared embedding space), the evaluation protocol is designed to assess unlearning jointly across both encoders.
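One intuition for why both encoders must be coordinated: the association lives in the joint similarity, so perturbing only one encoder's output can leave residual alignment behind. A toy sketch with random vectors standing in for encoder outputs (nothing here comes from the paper):

```python
import numpy as np

def cos(a, b):
    """Cosine similarity between two 1-D vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
d = 32
# Stand-ins for a CLIP-style dual encoder's outputs in the shared space.
img_emb = rng.normal(size=d)
txt_emb = img_emb + 0.05 * rng.normal(size=d)   # tightly aligned pair

before = cos(img_emb, txt_emb)

# "Unlearning" by perturbing only the text embedding: the image side
# still sits at the old joint location, so alignment is weakened
# rather than removed.
txt_edited = txt_emb + 2.0 * rng.normal(size=d)
after = cos(img_emb, txt_edited)
```

The residual similarity after the one-sided edit is exactly the kind of implicit retention that a fine-grained, dual-encoder evaluation protocol is meant to surface.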


Section 06

Research Significance and Future Outlook

For academia, it establishes new evaluation standards and releases public datasets, models, and scripts to support quantitative analysis. For industry, it exposes the limitations of current techniques and guides companies in evaluating unlearning solutions. Future directions include more precise unlearning algorithms, a deeper understanding of how knowledge is stored in these models, dynamic evaluation, and cross-modal coordination.


Section 07

Conclusion

SALMUBench marks the entry of machine-unlearning research into the fine-grained evaluation stage, and the dilemma it reveals points the way to future breakthroughs. As multimodal models move into sensitive domains, precise machine unlearning will become a frontier of AI security.