Section 01
Introduction: Reasoning Consistency in Knowledge Distillation—Are Compressed Models 'Thinking' Correctly?
This article examines a core but often overlooked question in knowledge distillation: when a compressed student model and its teacher give the same answer, do they rely on the same reasoning? Through three complementary analyses (GradCAM saliency map comparison, CKA representation alignment, and calibration analysis), it reveals a key phenomenon, the decoupling of accuracy from reasoning consistency, and offers a new perspective on model evaluation for edge AI deployment.
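To make the second of these dimensions concrete, here is a minimal sketch of linear CKA (centered kernel alignment) applied to teacher and student feature matrices. This is an illustration under assumed inputs, not the article's actual evaluation code: the `linear_cka` helper, the layer choice, and the feature shapes are all hypothetical.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two feature matrices of shape (n_samples, dim).

    A value near 1.0 means the two representations are highly aligned;
    a value near 0 means the models encode the same inputs differently,
    even when their final predictions agree.
    """
    # Center each feature dimension (column) across samples.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Linear CKA: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return cross / (norm_x * norm_y)

# Hypothetical usage: activations from matching layers of a teacher and a
# distilled student on the same batch (random placeholders stand in here).
teacher_feats = np.random.randn(256, 512)  # placeholder teacher activations
student_feats = np.random.randn(256, 128)  # placeholder student activations
print(f"CKA(teacher, student) = {linear_cka(teacher_feats, student_feats):.3f}")
```

Note that CKA compares representations rather than outputs, which is exactly what lets it expose the accuracy/consistency decoupling: two models can agree on every label while scoring low on CKA.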