Section 01
Introduction: An LLM-Guided Semantic Guidance Framework Giving Tsetlin Machines Both BERT-Level Performance and Interpretability
This article proposes a semantic guidance framework that transfers LLM knowledge to the symbolic Tsetlin Machine (TM). It addresses a long-standing dilemma: pre-trained language models such as BERT offer strong semantic capability but little interpretability, while symbolic models are interpretable but generalize weakly on semantics. The framework achieves BERT-level text classification performance while remaining fully symbolic and efficient, making it suitable for high-stakes domains such as healthcare and law and offering a new paradigm for interpretable AI.
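To make the interpretability claim concrete, the sketch below shows why TM-style symbolic models are directly inspectable: classification is a vote over human-readable conjunctive clauses. The clause set, literal names, and task are invented for illustration; the article's actual training and LLM-guidance procedure is not shown here.

```python
# Toy clause-based classifier in the spirit of a Tsetlin Machine.
# Clauses and feature names are hypothetical, for illustration only.

def classify(features, clauses):
    """Vote over conjunctive clauses.

    features: dict mapping literal name -> bool
    clauses:  list of (polarity, [literal names]); a clause fires when all
              of its literals are True, adding +1 or -1 to the vote.
    """
    vote = 0
    for polarity, literals in clauses:
        if all(features.get(lit, False) for lit in literals):
            vote += polarity
    return 1 if vote > 0 else 0

# Hypothetical clauses for a sentiment task; each rule is readable as-is.
clauses = [
    (+1, ["contains_excellent"]),
    (+1, ["contains_recommend", "not_negated"]),
    (-1, ["contains_terrible"]),
]

positive_review = {"contains_excellent": True, "not_negated": True}
negative_review = {"contains_terrible": True}
print(classify(positive_review, clauses))  # 1
print(classify(negative_review, clauses))  # 0
```

Unlike a BERT classifier, every prediction here can be traced to the exact clauses that fired, which is the property the framework aims to preserve while borrowing the LLM's semantic knowledge.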