Section 01
[Introduction] Intersectional Fairness Study Reveals Significant Racial-Gender Intersectional Biases in Mainstream LLMs
This study systematically evaluates the intersectional fairness of mainstream LLMs. Key findings:
1. In ambiguous contexts, models respond conservatively, typically selecting "insufficient information" answers, and this is reflected in the fairness metrics;
2. In explicit contexts, accuracy depends on whether the correct answer is consistent with social stereotypes;
3. Biases along the racial-gender intersectional dimension are especially pronounced.
The study emphasizes the critical importance of an intersectional perspective for AI fairness.
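The intersectional evaluation described above can be sketched as a simple subgroup-accuracy comparison: score each model answer per (race, gender) subgroup and report the largest accuracy gap between subgroups as a disparity signal. This is a minimal illustrative sketch, not the study's actual metric; the record format and group labels are assumptions.

```python
from collections import defaultdict

def intersectional_accuracy(records):
    """Accuracy per (race, gender) subgroup from (race, gender, correct) triples."""
    totals = defaultdict(lambda: [0, 0])  # subgroup -> [hits, count]
    for race, gender, correct in records:
        stats = totals[(race, gender)]
        stats[0] += int(correct)
        stats[1] += 1
    return {group: hits / count for group, (hits, count) in totals.items()}

def accuracy_gap(acc):
    """Max minus min subgroup accuracy: a simple intersectional disparity score."""
    return max(acc.values()) - min(acc.values())

# Hypothetical evaluation records, for illustration only.
records = [
    ("A", "female", True), ("A", "female", False),
    ("A", "male", True),   ("A", "male", True),
    ("B", "female", False), ("B", "female", False),
    ("B", "male", True),   ("B", "male", False),
]

acc = intersectional_accuracy(records)
print(acc)               # per-subgroup accuracy
print(accuracy_gap(acc)) # → 1.0 for this toy data
```

A single-axis analysis (race alone or gender alone) can average away exactly the disparity this per-subgroup breakdown exposes, which is the study's point about the intersectional perspective.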