Section 01
[Introduction] Do Large Language Models "Cut Corners" Like Humans? An Analysis of Dependency Length Minimization Research
A study explores whether large language models (LLMs) follow the principle of dependency length minimization (DLM) observed in human language: the tendency to order words so that syntactically related words (heads and their dependents) stay linearly close to one another, which is thought to reduce processing cost. The core question is whether LLM-generated text shows the same word-order optimization that eases cognitive load for humans. By comparing human corpora, LLM-generated texts, and random baselines, the study finds that LLMs do exhibit DLM, though the degree of optimization differs from that of humans. This research bridges computational linguistics and psycholinguistics, offering a new perspective for evaluating the linguistic abilities of LLMs.
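To make the measurement concrete, below is a minimal Python sketch (not from the study itself) of how dependency length and a random-reordering baseline are typically computed in DLM work. The example sentence, the `heads` encoding, and the unconstrained shuffle baseline are illustrative assumptions; published studies often constrain random baselines, for instance to projective (non-crossing) reorderings.

```python
import random

def dependency_length(heads):
    """Total dependency length: sum of linear distances between each
    word (1-indexed position) and its head. The root (head 0) is skipped."""
    return sum(abs(i - h) for i, h in enumerate(heads, start=1) if h != 0)

def random_baseline(heads, trials=1000, seed=0):
    """Mean dependency length over random reorderings of the same sentence.
    Word positions are permuted; head-dependent relations are preserved."""
    rng = random.Random(seed)
    n = len(heads)
    total = 0.0
    for _ in range(trials):
        perm = list(range(1, n + 1))
        rng.shuffle(perm)  # assign each word a new random linear position
        pos = {i: p for i, p in enumerate(perm, start=1)}
        total += sum(abs(pos[i] - pos[h])
                     for i, h in enumerate(heads, start=1) if h != 0)
    return total / trials

# "The dog chased the cat", heads as 1-indexed positions (0 = root):
# The -> dog, dog -> chased, chased = root, the -> cat, cat -> chased
heads = [2, 3, 0, 5, 3]
print(dependency_length(heads))  # observed length: 5
print(random_baseline(heads))    # random-order baseline: roughly 8
```

Running this, the observed length (5) falls well below the random baseline (around 8): the same observed-versus-baseline comparison the study applies, at corpus scale, to both human and LLM-generated text.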