Section 01
[Introduction] MLLM-Shap: Injecting Interpretability into Multimodal Large Models Using Shapley Values
MLLM-Shap, proposed by the Data Science Undergraduate Program at Warsaw University of Technology, brings Shapley values from cooperative game theory to multimodal large language models (MLLMs). It targets the black-box problem of MLLMs by providing an interpretability analysis tool based on feature attribution. The method addresses the specific challenges of attribution in multimodal settings and supports model debugging, bias detection, and user trust building. It is an attempt to combine classical XAI theory with state-of-the-art models.
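To make the attribution idea concrete, here is a minimal sketch of exact Shapley value computation over a toy coalition of multimodal inputs. The `toy_score` function and the feature names (`image_patch`, `text_token`) are hypothetical stand-ins for a model's output score, not the MLLM-Shap implementation itself: each feature's Shapley value is its marginal contribution averaged over all coalitions, which is exactly the quantity a Shapley-based attribution method estimates for real model inputs.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value_fn):
    """Exact Shapley values: each player's marginal contribution,
    weighted over all coalitions of the other players."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value_fn(set(S) | {p}) - value_fn(set(S)))
        phi[p] = total
    return phi

def toy_score(coalition):
    """Hypothetical model score for a coalition of multimodal features."""
    score = 0.0
    if "image_patch" in coalition:
        score += 0.5
    if "text_token" in coalition:
        score += 0.3
    if "image_patch" in coalition and "text_token" in coalition:
        score += 0.2  # interaction term: the modalities reinforce each other
    return score

phi = shapley_values(["image_patch", "text_token"], toy_score)
print(phi)  # the values sum to toy_score(full coalition) - toy_score(empty)
```

Note the efficiency property visible here: the attributions sum to the full-coalition score minus the empty-coalition score, so the interaction bonus is split between the two features. Exact computation is exponential in the number of features, which is why practical methods rely on sampling approximations.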