Section 01
Introduction: Exploring New Paths of Knowledge Distillation in NLI
This article addresses the "black box" problem and the shallow-reasoning limitations of Natural Language Inference (NLI) models. It studies how to transfer reasoning signals from human-written explanations and LLM chain-of-thought rationales into efficient encoder models via knowledge distillation, compares the effectiveness of the two signal sources, and explores hybrid strategies, offering a new direction for interpretable NLI.
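To make the core mechanism concrete, here is a minimal sketch of soft-label knowledge distillation, the standard setup in which a student matches the teacher's temperature-softened class distribution over the three NLI labels. The logit values are purely illustrative, and this is a dependency-free illustration of the loss, not the article's specific training pipeline.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: higher T softens the distribution,
    # exposing the teacher's relative confidence across classes.
    exps = [math.exp(x / T) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, T=2.0):
    # KL(teacher || student) over softened distributions, scaled by T^2
    # (the classic soft-label distillation objective).
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return (T ** 2) * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Three NLI classes: entailment, neutral, contradiction.
teacher = [3.2, 0.1, -1.5]  # hypothetical LLM teacher scores
student = [2.0, 0.5, -0.5]  # hypothetical encoder student scores
print(distillation_loss(teacher, student))  # positive; 0 only if they match
```

In rationale-based variants, the teacher's chain-of-thought would additionally supervise the student through an auxiliary objective (e.g. explanation generation or attention alignment) alongside this label-matching term.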