Section 01
[Introduction] Core Findings of the MoE Routing Mechanism Interpretability Study
This study presents a systematic interpretability analysis of the routing mechanism in Mixture-of-Experts (MoE) large language models, using controlled experiments to probe which experts the router activates when the model generates phenomenological language. In the Qwen3.5-35B-A3B model, Expert 114 (E114) was found to respond specifically to the generation of phenomenological/mental-state language. This finding offers a key clue to the internal workings of MoE models and a methodological reference for subsequent interpretability research.
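Analyses of this kind typically count how often each expert is selected under the router's top-k gating while the model generates text of a given category. A minimal sketch of that bookkeeping is below; the expert count, k value, and logits are all hypothetical illustrations, not values from the study:

```python
import numpy as np

def top_k_route(logits, k=2):
    """Pick the k highest-scoring experts and softmax-normalize their weights."""
    top = np.argsort(logits)[::-1][:k]
    w = np.exp(logits[top] - logits[top].max())
    return top, w / w.sum()

def activation_counts(token_logits, n_experts, k=2):
    """Tally how often each expert is routed to across a batch of tokens."""
    counts = np.zeros(n_experts, dtype=int)
    for logits in token_logits:
        experts, _ = top_k_route(np.asarray(logits), k)
        counts[experts] += 1
    return counts

# Hypothetical router logits: 3 tokens, 4 experts.
rng = np.random.default_rng(0)
token_logits = rng.normal(size=(3, 4))
print(activation_counts(token_logits, n_experts=4))
```

Comparing such counts between a target condition (e.g. phenomenological prompts) and a control condition is one way a single expert, such as E114, could surface as selectively responsive.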