Section 01
[Introduction] Nationality Bias in Large Language Models: Representational Harm to the Global Majority
Recent research reveals systemic nationality bias in mainstream large language models (LLMs) when they generate narratives. The national identities of non-Western countries are heavily stereotyped and marginalized, with negative portrayals appearing more than 50 times as often as positive ones. This study focuses on cultural bias in LLMs, examining its background, methodology, findings, and mitigation strategies, with the aim of exposing the representational harm inflicted on the Global Majority in AI-generated narratives.