Study on Cross-Lingual Hallucination Drift: The Challenge of Factual Consistency in Multilingual Large Language Models
This article examines "cross-lingual hallucination drift" in multilingual large language models: factual inconsistencies that arise when the same model answers the same question in different languages. It explores how this drift depends on the task, which matters for assessing the reliability of multilingual AI systems. The study covers two task types, factual question answering and commonsense reasoning, selects languages across different resource levels, and evaluates the Aya Expanse model and GPT-4o-mini, aiming to identify the key factors that influence cross-lingual consistency.
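One simple way to operationalize cross-lingual consistency of the kind described above is to collect each model's answers to the same questions in several languages and measure how often language pairs agree. The sketch below is illustrative, not the study's actual protocol: the function name, the exact-match normalization, and the toy data are all assumptions.

```python
from itertools import combinations

def cross_lingual_consistency(answers_by_lang):
    """Average pairwise agreement rate across languages.

    `answers_by_lang` maps a language code to a list of answers
    (one per question, same question order in every language).
    Agreement here is naive exact match after lowercasing; a real
    evaluation would need translation-aware answer matching.
    """
    langs = list(answers_by_lang)
    n_questions = len(next(iter(answers_by_lang.values())))
    pair_scores = []
    for a, b in combinations(langs, 2):
        agree = sum(
            answers_by_lang[a][i].strip().lower()
            == answers_by_lang[b][i].strip().lower()
            for i in range(n_questions)
        )
        pair_scores.append(agree / n_questions)
    return sum(pair_scores) / len(pair_scores)

# Toy example: three factual questions answered in three languages
# (answers normalized to English tokens for comparison).
answers = {
    "en": ["paris", "1969", "oxygen"],
    "de": ["paris", "1969", "oxygen"],
    "sw": ["paris", "1968", "nitrogen"],  # hypothetical drift in a lower-resource language
}
print(round(cross_lingual_consistency(answers), 3))  # → 0.556
```

A score of 1.0 would mean every language pair agrees on every question; drift shows up as pairs involving lower-resource languages pulling the average down.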