Section 01
Introduction to the Impact of Missing Data on Model Inference: A Stability Study Based on Explainable AI
This study addresses the missing data problem that pervades real-world applications, using explainable AI (XAI) techniques to systematically examine how missingness affects the decision stability of machine learning models. Its core claims are: missing data can not only reduce model accuracy but also destabilize the inference process; high predictive accuracy does not guarantee reliable explanations; the pattern of missingness (not merely its proportion) drives the impact on models; and different model families differ markedly in their robustness to missing data. The study provides a theoretical basis and a practical framework for data quality assessment and for building trustworthy AI.
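The claim that the missing-data pattern can matter more than the missing proportion can be illustrated with a toy simulation (the setup, feature names, and numbers below are illustrative assumptions, not taken from the study): fit ordinary least squares on two correlated features, then mean-impute one feature under MCAR versus MNAR missingness at the same 30% rate, and compare the fitted coefficients as a simple stand-in for feature attributions.

```python
import random

random.seed(0)

def fit_ols2(x1, x2, y):
    """Two-feature OLS via centered normal equations (intercept absorbed)."""
    n = len(y)
    m1, m2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    a = [v - m1 for v in x1]
    b = [v - m2 for v in x2]
    c = [v - my for v in y]
    s11 = sum(v * v for v in a)
    s22 = sum(v * v for v in b)
    s12 = sum(u * v for u, v in zip(a, b))
    s1y = sum(u * v for u, v in zip(a, c))
    s2y = sum(u * v for u, v in zip(b, c))
    det = s11 * s22 - s12 * s12
    return ((s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det)

def mean_impute(xs, missing):
    """Replace missing entries with the mean of the observed entries."""
    obs = [v for v, m in zip(xs, missing) if not m]
    mu = sum(obs) / len(obs)
    return [mu if m else v for v, m in zip(xs, missing)]

n, rate = 20000, 0.30
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [0.8 * v + 0.6 * random.gauss(0, 1) for v in x1]  # correlated proxy of x1
y = [3 * a + 0.5 * b + random.gauss(0, 0.1) for a, b in zip(x1, x2)]

# Same 30% missing rate on x1, two different patterns:
mcar = [random.random() < rate for _ in range(n)]  # missing completely at random
thresh = sorted(x1)[int((1 - rate) * n)]
mnar = [v >= thresh for v in x1]                   # the largest values go missing

b_full = fit_ols2(x1, x2, y)  # close to the true coefficients (3.0, 0.5)
b_mcar = fit_ols2(mean_impute(x1, mcar), x2, y)
b_mnar = fit_ols2(mean_impute(x1, mnar), x2, y)
# Under both patterns part of x1's weight leaks onto its correlated proxy x2,
# but at the same missing rate the MNAR pattern distorts the coefficients
# considerably more than MCAR does.
```

The distortion, not just the accuracy loss, is the point: the model may still predict reasonably well while its attributions shift from the truly causal feature to a correlated proxy, which is the kind of explanation instability the study investigates.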