Anti-Distillation: A Defense Technology for Protecting Large Models from Knowledge Distillation via Adversarial Decoding
This project proposes a cross-model adversarial decoding method that increases the difficulty of distilling knowledge from large models into small models during the post-training phase, offering a new technical approach to protecting model intellectual property. Notably, the goal of this research is not to block knowledge transfer entirely, but to raise the cost and difficulty of unauthorized distillation, giving model owners greater control.
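To make the idea concrete, below is a minimal, self-contained sketch of one generic anti-distillation trick at decoding time: perturb the output logits so that greedy decoding (the argmax token) is unchanged for ordinary users, while the soft probability distribution a distiller would train on is distorted. This is an illustrative assumption, not the project's actual cross-model method; the function and parameter names (`protect_logits`, `alpha`) are hypothetical.

```python
import numpy as np

def protect_logits(logits, alpha=5.0, seed=None):
    """Illustrative sketch (hypothetical API, not the project's method):
    add noise to all logits except the top one, then clamp the others so
    no perturbed logit overtakes the original argmax. Greedy decoding is
    preserved, but the soft labels seen by a distiller are distorted."""
    rng = np.random.default_rng(seed)
    logits = np.asarray(logits, dtype=float)
    top = int(np.argmax(logits))
    noise = rng.normal(scale=alpha, size=logits.shape)
    noise[top] = 0.0  # leave the top token's logit untouched
    perturbed = logits + noise
    # Clamp non-top logits strictly below the original top logit.
    mask = np.arange(logits.size) != top
    perturbed[mask] = np.minimum(perturbed[mask], logits[top] - 1e-3)
    return perturbed

def softmax(x):
    z = np.exp(x - np.max(x))
    return z / z.sum()

# Example: the greedy token is preserved, but the distribution changes.
logits = np.array([2.0, 1.0, 0.5, -1.0])
protected = protect_logits(logits, seed=0)
assert np.argmax(protected) == np.argmax(logits)
```

In this toy setting, a student trained to match `softmax(protected)` receives a corrupted ranking over the non-top tokens, which is exactly the signal soft-label distillation relies on, while a user who only reads the sampled or greedy token sees no difference.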