Section 01
[Introduction] When AI Processes Public Opinions: Do Large Models Have Systemic Occupational Bias Against Grassroots Voices?
Core point: A large-scale controlled experiment on 8 publicly available LLMs found that occupation was the only identity signal to elicit consistently differential treatment. When the same comment is attributed to a street vendor rather than a financial analyst, the resulting summary preserves less of the original meaning, uses simpler language, and shifts in emotional tone. The study examines the fairness of AI-assisted processing of public comments in the U.S. federal notice-and-comment process, revealing a potential systemic occupational bias in AI systems, with important implications for equal democratic participation.