Section 01
Study on Bias in Large Language Model Peer Review: A Technical Examination of Academic Fairness (Original Post Introduction)
The oamin-ai team conducted a systematic, controlled-experiment study of bias in large language model (LLM) academic peer review, examining dimensions such as institutional prestige bias and racial bias. The study reveals potential risks in AI-assisted academic evaluation systems and proposes directions for improvement. It emphasizes that technological progress must balance efficiency with fairness, offering an important reference point for AI ethics and academic justice.