Section 01
NeFT: Introduction to the New Neuron-Level Supervised Fine-Tuning Method
NeFT (Neuron-level Fine-Tuning) is a neuron-level supervised fine-tuning framework for large language models, published at COLING 2025. Whereas existing Parameter-Efficient Fine-Tuning (PEFT) methods mostly operate at the layer or matrix level, NeFT identifies task-relevant neurons and updates only those, enabling more precise and efficient model adaptation. By reducing fine-tuning cost while preserving the model's general capabilities, it opens a new path for low-cost fine-tuning of large models.
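The core idea of updating only selected neurons can be sketched as masking the gradient step so that non-selected rows of a weight matrix keep their pretrained values. The snippet below is a minimal, hypothetical NumPy illustration; the function name `neuron_masked_update` and the hard-coded neuron indices are assumptions for illustration only, and NeFT's actual criterion for identifying task-relevant neurons is defined in the paper, not here.

```python
import numpy as np

def neuron_masked_update(W, grad, neuron_ids, lr=0.1):
    """Apply a gradient step only to selected neurons (rows of W).

    Hypothetical sketch of neuron-level selective updating, not the
    authors' implementation. Each row of W is treated as one neuron's
    incoming weights; rows outside neuron_ids are left untouched.
    """
    mask = np.zeros((W.shape[0], 1))
    mask[neuron_ids] = 1.0           # 1 for task-relevant neurons, 0 otherwise
    return W - lr * mask * grad      # masked rows keep pretrained values

# Toy example: a 4-neuron layer where only neurons 1 and 3 are updated.
W = np.ones((4, 3))
grad = np.full((4, 3), 0.5)
W_new = neuron_masked_update(W, grad, neuron_ids=[1, 3])
```

In this toy run, rows 0 and 2 remain at their original values while rows 1 and 3 take a standard gradient step, which mirrors how neuron-level fine-tuning leaves most of the pretrained model intact.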