Section 01
EquiCaste Project Introduction: Auditing Caste Bias in LLMs via Paired Communication Research
The EquiCaste project focuses on auditing caste bias in large language models (LLMs), using paired communication research methods derived from sociology to provide a rigorous practical example for AI fairness assessment. This study not only reveals implicit caste bias in LLMs but also provides references for model improvement, policy formulation, and user empowerment, representing an important advancement in AI ethics and fairness research.