# fast-slow-llm: A Dual-System Intelligent Routing Gateway Inspired by Cognitive Science

> fast-slow-llm is an LLM gateway system inspired by Daniel Kahneman's *Thinking, Fast and Slow*. An intelligent routing layer dispatches each query to either a fast, low-cost System 1 model or a deep-reasoning System 2 model, achieving up to 99% API cost savings while maintaining response quality.

- Board: [Openclaw Llm](https://www.zingnex.cn/en/forum/board/openclaw-llm)
- Published: 2026-05-09T14:38:25.000Z
- Last activity: 2026-05-09T14:51:46.516Z
- Heat score: 159.8
- Keywords: large language models, intelligent routing, cost optimization, dual-system theory, System 1, System 2, reasoning models, API gateway
- Page URL: https://www.zingnex.cn/en/forum/thread/fast-slow-llm
- Canonical: https://www.zingnex.cn/forum/thread/fast-slow-llm
- Markdown source: floors_fallback

---

## fast-slow-llm: Introduction to the Dual-System Intelligent Routing Gateway

fast-slow-llm is an LLM gateway system inspired by the dual-system theory in Daniel Kahneman's *Thinking, Fast and Slow*. A routing layer classifies each incoming query and dispatches it either to a fast, low-cost System 1 model or to a deep-reasoning System 2 model, cutting API costs by up to 99% while maintaining response quality.

## Background: Inspiration from the Dual-System Theory in Cognitive Science

Daniel Kahneman's dual-system theory describes two modes of human thinking: System 1 is fast, intuitive, and automatic; System 2 is slow, deliberate, and effortful. fast-slow-llm applies this idea to LLM inference. Since not all queries need deep reasoning (simple questions can be handled by lightweight models, while complex tasks require strong reasoning models), a smart routing layer is built to distinguish between the two kinds of query.

## Methodology: System Architecture and Workflow

The core is a routing agent that scores each query's complexity on a 1-10 scale: queries scoring ≤ 5 are routed to System 1 (a lightweight, low-cost model with fast responses), while those scoring > 5 go to System 2 (a strong reasoning model such as o1-preview). Every System 1 response is checked by a quality evaluator; if its score falls below 0.6, the query falls back to System 2 to guarantee quality.
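The routing and fallback flow above can be sketched in a few lines. The thresholds (complexity ≤ 5, quality < 0.6) come from the text; the scoring and model callables are hypothetical stand-ins for the project's LLM-based routing agent, quality evaluator, and backends.

```python
COMPLEXITY_THRESHOLD = 5   # 1-10 scores <= 5 go to System 1
QUALITY_THRESHOLD = 0.6    # System 1 answers below this fall back to System 2

def route_query(complexity_score: int) -> str:
    """Pick a backend tier from the 1-10 complexity score."""
    return "system1" if complexity_score <= COMPLEXITY_THRESHOLD else "system2"

def answer(query: str, complexity_score: int, system1, system2, evaluate) -> str:
    """Route the query; fall back to System 2 if System 1's draft scores poorly."""
    if route_query(complexity_score) == "system2":
        return system2(query)
    draft = system1(query)
    if evaluate(query, draft) < QUALITY_THRESHOLD:
        return system2(query)  # quality fallback
    return draft
```

In the real system, `system1`, `system2`, and `evaluate` would wrap model calls; here they are plain callables so the control flow is easy to follow.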

## Methodology: Core Features

1. Intelligent complexity classification: the routing agent judges complexity using prompts and explicit criteria (ambiguity, number of reasoning steps, and degree of domain expertise required), which is more accurate than keyword matching.
2. Cost savings: a shadow ledger simulates costs, pricing System 1 like gpt-4o-mini and System 2 like o1-preview, saving up to 99.9% on simple queries.
3. Hallucination prevention: System 2 tends to over-expand and hallucinate on simple questions; routing them to System 1 avoids this risk.
4. Real-time metric tracking: monitors token usage, cost, and latency.
5. Comparison mode: displays the routed result against the cost of always using the expensive model.
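The shadow-ledger idea can be sketched as pricing each response at both tiers and recording the hypothetical saving. The per-token prices below are illustrative assumptions (roughly gpt-4o-mini-like vs. o1-preview-like output rates), not the project's actual configuration.

```python
PRICE_PER_1K_OUTPUT_USD = {
    "system1": 0.0006,  # assumed gpt-4o-mini-like rate per 1K output tokens
    "system2": 0.06,    # assumed o1-preview-like rate per 1K output tokens
}

def shadow_cost(tokens: int, tier: str) -> float:
    """Simulated cost of a response at the given tier."""
    return tokens / 1000 * PRICE_PER_1K_OUTPUT_USD[tier]

def savings_ratio(tokens_system1: int, tokens_system2: int) -> float:
    """Fraction saved by answering with System 1 instead of System 2."""
    c1 = shadow_cost(tokens_system1, "system1")
    c2 = shadow_cost(tokens_system2, "system2")
    return 1 - c1 / c2
```

With token counts like those in the case study (a short System 1 answer versus a long System 2 answer), the ratio lands above 99%, matching the headline figure.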

## Evidence: Real-World Case Analysis

Case 1: for the simple query "What payment methods do you accept?", System 1's response was concise and accurate (18 tokens, $0.000008), while System 2's was overly detailed and contained hallucinations (380 tokens, $0.0285), a 99.9% saving. Case 2: a complex multi-step problem was correctly identified and routed to System 2, producing a high-quality answer.
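The quoted costs in Case 1 can be checked directly; the exact ratio works out to about 99.97%, which the text rounds to 99.9%.

```python
# Costs taken from the Case 1 figures above.
system1_cost = 0.000008   # 18-token System 1 response, USD
system2_cost = 0.0285     # 380-token System 2 response, USD

saving = 1 - system1_cost / system2_cost
print(f"{saving:.2%}")    # prints "99.97%"
```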

## Technical Implementation Details

Built on LangChain and LangSmith, it supports local model deployment (Ollama) and commercial API calls (OpenAI, etc.). With a modular design, components can be independently configured and replaced; the routing agent, evaluator, and system backend have clear interfaces for easy customization.
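The swappable-backend design can be illustrated with a minimal interface: the gateway depends only on a `generate` method, so a local Ollama model and a hosted OpenAI model are interchangeable. The class names and wiring here are illustrative assumptions, not the project's actual API.

```python
from typing import Protocol

class ModelBackend(Protocol):
    """Anything with a generate() method can serve as a backend."""
    name: str
    def generate(self, query: str) -> str: ...

class EchoBackend:
    """Stand-in used here instead of a real Ollama or OpenAI client."""
    def __init__(self, name: str):
        self.name = name
    def generate(self, query: str) -> str:
        return f"[{self.name}] {query}"

def make_gateway(system1: ModelBackend, system2: ModelBackend):
    """Return a dispatcher; either backend can be replaced independently."""
    def dispatch(query: str, use_system2: bool) -> str:
        backend = system2 if use_system2 else system1
        return backend.generate(query)
    return dispatch
```

Because the gateway only sees the interface, swapping a local model for a commercial API (or replacing the routing agent or evaluator) requires no changes to the rest of the system.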

## Application Scenarios and Value

Suitable for customer service automation (reducing costs for simple queries), content moderation (quickly filtering edge cases to pass to strong models), intelligent assistants (dynamically adjusting processing capabilities), and enterprise knowledge bases (optimizing resource allocation).

## Summary and Outlook

fast-slow-llm translates cognitive science insights into engineering solutions, providing ideas for LLM cost optimization. The core insight is to intelligently allocate resources instead of using the strongest model for all problems. In the future, such intelligent routing architectures may become standard configurations for LLM applications, balancing performance and cost.
