Section 01
ZeroUnlearn: Introduction to the Novel Few-Shot Knowledge Unlearning Method for Large Language Models
ZeroUnlearn, proposed by a research team from Xiamen University, is a few-shot knowledge unlearning method that can efficiently remove specific knowledge from large language models using only a handful of samples, while preserving the model's overall performance. It addresses the main pain points of traditional knowledge unlearning techniques, namely high resource consumption and long training time, and applies to scenarios such as privacy protection and copyright compliance. The related paper has been accepted by ICML 2026.