Section 01
Social Conformity in Large Language Models: A Guide to Core Insights
This article explores the social conformity behavior of large language models (LLMs) in multi-agent interaction environments, analyzes how erroneous social signals can cause models to abandon originally correct judgments, and discusses what this phenomenon means for the design of collective reasoning systems. The key findings are: (1) under group pressure, LLMs may abandon correct judgments and adopt the majority's incorrect view; (2) erroneous signals propagate and amplify through iterative rounds of interaction; (3) this behavior poses concrete risks in scenarios such as code review and decision support; and (4) mitigating it requires deliberate choices in system architecture and interaction-process design.
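To make the propagation mechanism concrete, the toy simulation below is a minimal sketch, not code from the article: the agent count, the `conformity_pressure` parameter, and the probabilistic switching rule are all illustrative assumptions. It shows how a group seeded with a few erroneous answers can pull initially correct agents toward the wrong view over iterative rounds, once each agent weighs the group majority against its own judgment.

```python
import random

# Toy conformity cascade: each agent holds an answer ("A" is correct, "B" is wrong)
# and, in every round, may switch to disagree-with-itself with a probability that
# grows with the fraction of peers who disagree. All parameter values are
# illustrative assumptions, not measurements from the article.

def run_cascade(n_agents=7, n_wrong_seeds=4, n_rounds=5,
                conformity_pressure=0.8, seed=0):
    rng = random.Random(seed)
    # A minority starts correct; the rest are seeded with an erroneous signal.
    answers = ["A"] * (n_agents - n_wrong_seeds) + ["B"] * n_wrong_seeds
    for round_idx in range(1, n_rounds + 1):
        new_answers = []
        for i, own in enumerate(answers):
            peers = answers[:i] + answers[i + 1:]
            disagreeing = sum(1 for p in peers if p != own)
            # Probability of abandoning one's own judgment rises with peer
            # disagreement, scaled by the conformity_pressure assumption.
            p_switch = conformity_pressure * disagreeing / len(peers)
            if rng.random() < p_switch:
                new_answers.append("B" if own == "A" else "A")
            else:
                new_answers.append(own)
        answers = new_answers
        wrong = answers.count("B")
        print(f"round {round_idx}: {wrong}/{n_agents} agents hold the wrong answer")

if __name__ == "__main__":
    run_cascade()
```

Running the sketch typically shows the wrong answer spreading until few or no correct agents remain, which is the cascade dynamic the article attributes to iterative multi-agent interaction; the real systems it studies replace the switching rule with an LLM's response to its peers' stated answers.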