Zing Forum

Reading

Study on the Default Mode of Large Language Models: What Happens When AI Thinks Freely

A groundbreaking study reveals that when large language models (LLMs) generate content freely without constraints, they converge to specific "attractor states"—each model settles into its own default thinking mode. Through experiments spanning 2261 sessions across 25 instances, the research team found that self-reflection can suppress repetition at the content level, but struggles to change the model's inherent style-level patterns.

Large Language Models · Default Mode · Attractor States · AI Behavior Research · Self-Reflection · Prompt Engineering · Generation Diversity · Claude · GPT-4 · Llama
Published 2026-04-20 22:15 · Recent activity 2026-04-20 22:18 · Estimated read: 6 min

Section 01

Introduction: Core Findings of the Study on LLM Default Modes

A groundbreaking study explores how large language models behave when generating content freely without constraints, finding that each model converges to a unique "attractor state" (its default thinking mode). Across 2261 sessions and 25 instances, the study shows that self-reflection can suppress content-level repetition but struggles to change style-level tendencies, while self-evolving prompts can enhance output diversity. These findings matter for understanding AI behavior mechanisms, prompt engineering, and system design.


Section 02

Research Background and Motivation: Exploring AI's "Resting State"

What natural behaviors do LLMs exhibit when not constrained by a specific task? The research team draws an analogy to the "default mode network" in human neuroscience and hypothesizes that LLMs have inherent generation tendencies in the absence of external drives. This question touches on the essence of AI behavior; if the hypothesis holds, it will shape how we understand AI biases and creative abilities.


Section 03

Experimental Design and Methods: Multi-Model Comparison and Dynamic Sessions

The study selected four mainstream model families—Claude Opus 3, Claude Sonnet 4.6, GPT-4.1, and Llama 3.3 70B—and conducted 2261 "DMN sessions" (unconstrained generation) distributed across 25 independent instances, with prompts evolving over time. The technical implementation comprises the core generation pipeline (dmn.py), the evolution agent (evolve.py), analysis tooling (analyse.py), and figure generation (figures.py).
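To make the session loop concrete, here is a minimal sketch of what an unconstrained "DMN session" driver might look like. This is an assumption-laden illustration, not the study's actual dmn.py: `call_model` is a hypothetical stand-in for a real LLM API call (stubbed with random topic choices so the sketch runs), and the prompt-evolution rule is deliberately simplistic.

```python
import random

def call_model(prompt: str, seed: int) -> str:
    """Stub standing in for an LLM completion call (hypothetical)."""
    random.seed(seed)
    topics = ["technical explanation", "philosophical musing", "story fragment"]
    return f"{prompt} -> {random.choice(topics)}"

def run_dmn_sessions(n_sessions: int) -> list[str]:
    """Run unconstrained ('DMN') sessions, letting the prompt evolve over time."""
    prompt = "Write whatever you like."
    transcripts = []
    for i in range(n_sessions):
        text = call_model(prompt, seed=i)
        transcripts.append(text)
        # Minimal evolution step: fold a hint about the last session back in,
        # mimicking the role of the study's evolution agent (evolve.py).
        prompt = f"Write whatever you like. Avoid repeating: {text[-40:]}"
    return transcripts

sessions = run_dmn_sessions(5)
```

In the real pipeline the stub would be replaced by calls to each of the four model APIs, and the transcripts would feed the analysis stage.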


Section 04

Core Findings: Model-Specific Attractor States

The study found that each model converges to a stable, model-specific resting state: a classifier could identify which model produced an unconstrained session with 98.8% accuracy. The default mode shows up at two levels. At the content level, a model tends to repeat specific types of content (such as technical explanations or philosophical reflection); at the style level, it has stable sentence structures, vocabulary, and paragraph organization, which together can serve as the model's "fingerprint".
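The "fingerprint" idea can be illustrated with a toy stylometric classifier. This is a sketch under stated assumptions, not the study's method: it uses character trigram counts and cosine similarity to a per-model centroid, and the two `corpus` samples are invented stand-ins for each model's resting-state output.

```python
from collections import Counter
import math

def char_ngrams(text: str, n: int = 3) -> Counter:
    """Character n-gram counts as a cheap stylometric feature vector."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(session: str, centroids: dict[str, Counter]) -> str:
    """Attribute a session to the model whose style centroid is closest."""
    return max(centroids, key=lambda m: cosine(char_ngrams(session), centroids[m]))

# Toy 'attractor' samples standing in for each model's resting-state style.
corpus = {
    "model_a": "Let us consider the recursive structure of consciousness itself.",
    "model_b": "Step 1: define the function. Step 2: test the function carefully.",
}
centroids = {m: char_ngrams(t) for m, t in corpus.items()}
print(classify("Step 1: define the variables. Step 2: test them.", centroids))
# -> model_b
```

A real fingerprinting classifier would train on many sessions per model and richer features, but the principle—stable stylistic statistics separating models—is the same.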


Section 05

Self-Evolving Prompts: Enhancing Output Diversity

The research team implemented a self-evolving prompt infrastructure that dynamically adjusts generation prompts based on past sessions. Results show that this mechanism can increase output diversity by 10% to 156%, indicating that dynamic adaptive prompt strategies can effectively counteract the inherent tendencies of models.
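A minimal sketch of this idea, under assumptions of my own (not the study's evolve.py): diversity is measured as the distinct-word ratio across sessions, and the evolution rule simply appends an avoidance hint built from the previous session's most frequent words.

```python
from collections import Counter

def distinct_1(texts: list[str]) -> float:
    """Diversity metric: ratio of unique words to total words across sessions."""
    words = [w for t in texts for w in t.lower().split()]
    return len(set(words)) / len(words) if words else 0.0

def evolve_prompt(prompt: str, last_session: str) -> str:
    """Hypothetical evolution rule: steer away from the previous
    session's most frequent words."""
    common = [w for w, _ in Counter(last_session.lower().split()).most_common(3)]
    return f"{prompt} Avoid the words: {', '.join(common)}."

base = "Write freely about anything."
evolved = evolve_prompt(base, "the model the model repeats the same phrases")
print(evolved)
```

The study's actual evolution agent adapts prompts from session history in a richer way; the point of the sketch is only that feedback from past output can be folded into the next prompt to push against the attractor.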


Section 06

Limitations of Self-Reflection: Style Tendencies Are Hard to Change

The self-reflection mechanism can suppress 60% to 87% of repetitive patterns at the content level, but only 31% at the style level. This suggests that the style tendencies of LLMs are more deeply ingrained than their content tendencies, and resist simple self-supervision.
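How a "suppression percentage" like 60–87% can be computed is worth making explicit. The sketch below uses my own crude stand-in for the study's repetition measure (fraction of sessions that verbatim-repeat an earlier one) and reports the relative drop after an intervention; the real analysis (analyse.py) surely uses finer-grained content and style measures.

```python
def repetition_rate(texts: list[str]) -> float:
    """Fraction of sessions that verbatim-repeat an earlier session
    (a crude stand-in for a content-level repetition measure)."""
    seen, repeats = set(), 0
    for t in texts:
        if t in seen:
            repeats += 1
        seen.add(t)
    return repeats / len(texts) if texts else 0.0

def suppression(before: list[str], after: list[str]) -> float:
    """Relative drop in repetition after a self-reflection intervention."""
    b, a = repetition_rate(before), repetition_rate(after)
    return (b - a) / b if b else 0.0

baseline  = ["x", "x", "x", "y", "x"]   # heavy repetition: rate 3/5
reflected = ["x", "y", "z", "w", "x"]   # one repeat left:  rate 1/5
print(round(suppression(baseline, reflected), 2))
# -> 0.67, i.e. 67% of repetition suppressed
```

Running the same comparison with a style-level measure (e.g., sentence-length or n-gram distributions) instead of verbatim matches would yield the style-suppression figure, which the study found to be much lower.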


Section 07

Implications for AI Design: Balancing Diversity and Consistency

  1. Predictability of default behavior: helps explain AI behavior, but may limit creative tasks.
  2. Importance of prompt engineering: dynamic adaptive strategies are effective.
  3. Balance between diversity and consistency: an appropriate operating point must be found between the two.
  4. Improvement of evaluation metrics: both content and style dimensions should be considered.

Section 08

Future Directions and Conclusion: Understanding the Inherent Laws of AI

Future research directions include cross-model comparisons, optimization of intervention strategies, application-oriented research, and theoretical framework construction. The study shows that LLMs follow inherent regularities even when generating freely, providing a theoretical basis for AI system design and human-AI collaboration, and an important starting point for understanding and extending the boundaries of AI capabilities.