Section 01
[Introduction] How Government Media Control Shapes Large Language Models: Analysis of Core Connections
This article examines the mechanisms by which government media control shapes large language models (LLMs). It analyzes how training data sources and differences between information ecosystems lead AI systems to exhibit particular values and knowledge biases, and discusses the technical and social implications of this issue. Drawing on the state-media-influence-llm project, the research reveals the deep connection between information ecosystems and AI systems and argues that LLMs are not neutral tools: the "geopolitics" of their training data directly shapes their cognitive maps, so attention must be paid to data diversity and to how information ecosystems are constructed.