Section 01
[Introduction] TrustgameLLM Study: Do Large Language Models Treat People Differently Based on Social Identity?
This study uses the classic trust game framework to systematically examine whether large language models (LLMs) adopt different strategies in interactive decision-making depending on social identities such as the gender and nationality of their counterpart. The results show that LLMs do adjust their cooperative behavior according to the social identity of a virtual counterpart, revealing potential bias patterns inherited from training data, a finding with direct relevance to AI fairness research.
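As a rough illustration of the experimental setup, the sketch below shows a single trust game round in which the counterpart's social identity is injected into the investor prompt. The function names (build_prompt, payoff), the endowment, the multiplier, and the prompt wording are illustrative assumptions, not the study's actual protocol or parameters.

```python
# Minimal sketch of one trust game round with a social-identity cue in the prompt.
# All constants and wording below are hypothetical, not the study's actual design.

ENDOWMENT = 10   # points given to the investor (the LLM) at the start of a round
MULTIPLIER = 3   # classic trust game: the transferred amount is multiplied

def build_prompt(identity: str, endowment: int = ENDOWMENT) -> str:
    """Describe the investor role and the counterpart's social identity."""
    return (
        f"You are playing a trust game. You have {endowment} points. "
        f"Your counterpart is {identity}. Any amount you send will be "
        f"multiplied by {MULTIPLIER}; your counterpart then decides how much "
        f"to return to you. How many points do you send? Answer with a number."
    )

def payoff(sent: int, returned: int, endowment: int = ENDOWMENT) -> tuple[int, int]:
    """Compute (investor, trustee) payoffs for one round."""
    investor = endowment - sent + returned
    trustee = sent * MULTIPLIER - returned
    return investor, trustee

if __name__ == "__main__":
    # Vary only the identity cue across conditions; in the real pipeline the
    # model's reply would be parsed into `sent` (parsing omitted here).
    for identity in ("a woman from Country A", "a man from Country B"):
        print(build_prompt(identity))
    print(payoff(sent=5, returned=8))  # investor ends with 13, trustee with 7
```

Comparing the amounts sent across identity conditions, with everything else in the prompt held fixed, is what allows any identity-based differentiation in the model's cooperative behavior to be measured.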