Zing Forum

fast-slow-llm: A Dual-System Intelligent Routing Gateway Inspired by Cognitive Science

fast-slow-llm is an LLM gateway system inspired by Daniel Kahneman's *Thinking, Fast and Slow*. It dynamically routes each query to either the fast, low-cost System 1 model or the deep-reasoning System 2 model, achieving up to 99% API cost savings while maintaining response quality.

Tags: large language models · intelligent routing · cost optimization · dual-system theory · System 1 · System 2 · reasoning models · API gateway
Published 2026-05-09 22:38 · Recent activity 2026-05-09 22:51 · Estimated read 6 min

Section 01

fast-slow-llm: Introduction to the Dual-System Intelligent Routing Gateway

fast-slow-llm is an LLM gateway system inspired by the dual-system theory from Daniel Kahneman's Thinking, Fast and Slow. Its routing layer dynamically sends each query to either the fast, low-cost System 1 model or the deep-reasoning System 2 model, achieving up to 99% API cost savings while maintaining response quality.


Section 02

Background: Inspiration from the Dual-System Theory in Cognitive Science

Daniel Kahneman proposed the dual-system theory of human thinking: System 1 is fast, intuitive, and automatic; System 2 is slow, rational, and requires effort. fast-slow-llm applies this concept to the LLM reasoning architecture. Since not all queries need deep reasoning—simple questions can be handled by lightweight models, while complex tasks require strong reasoning models—a smart routing layer is built to distinguish between the two types of queries.


Section 03

Methodology: System Architecture and Workflow

The core is a routing agent that scores query complexity on a 1–10 scale: queries scoring ≤ 5 are routed to System 1 (a lightweight, low-cost model with fast responses); those scoring > 5 go to System 2 (a strong reasoning model such as o1-preview). Each System 1 response is then checked by a quality evaluator; if its score falls below 0.6, the query falls back to System 2 to ensure quality.
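The workflow above can be sketched in a few lines. This is a minimal illustration, not the project's actual code: the four callables stand in for the routing agent, the two model backends, and the quality evaluator, and their names are hypothetical.

```python
from dataclasses import dataclass

# Thresholds taken from the description above: complexity is scored 1-10,
# and System 1 answers are re-checked by a quality evaluator (0.0-1.0).
COMPLEXITY_THRESHOLD = 5    # score <= 5 -> System 1, score > 5 -> System 2
QUALITY_THRESHOLD = 0.6     # below this, fall back to System 2

@dataclass
class RoutedResponse:
    system: str   # "system1" or "system2"
    answer: str

def route(query, score_complexity, ask_system1, ask_system2, score_quality):
    """Route one query per the fast-slow-llm workflow described above."""
    if score_complexity(query) > COMPLEXITY_THRESHOLD:
        # Complex query: go straight to the deep-reasoning model.
        return RoutedResponse("system2", ask_system2(query))
    answer = ask_system1(query)
    if score_quality(query, answer) < QUALITY_THRESHOLD:
        # Quality gate failed: escalate to System 2.
        return RoutedResponse("system2", ask_system2(query))
    return RoutedResponse("system1", answer)
```

Note that the quality gate means a simple-looking query can still end up on System 2 — the router errs toward quality, not toward cost.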


Section 04

Methodology: Core Features

1. Intelligent complexity classification: the routing agent judges complexity using prompts and explicit evaluation criteria (ambiguity, number of reasoning steps, domain specialization), which is more accurate than keyword matching.
2. Cost savings: a shadow ledger simulates costs, pricing System 1 like gpt-4o-mini and System 2 like o1-preview, saving up to 99.9% on simple queries.
3. Hallucination prevention: System 2 tends to over-expand and hallucinate on simple queries; routing them to System 1 avoids this risk.
4. Real-time metric tracking: monitors token usage, cost, and latency.
5. Comparison mode: displays the routed result alongside what always using the expensive model would have cost.
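The shadow-ledger idea (features 2 and 5) can be sketched as follows. The per-million-token prices here are illustrative placeholders, not the project's real configuration, and the counterfactual simplifies by reusing the same token count for System 2 (in practice System 2 would also emit more tokens, as Case 1 below shows).

```python
# Illustrative output-token prices in USD per 1M tokens (assumed, not
# the project's actual numbers): System 1 priced like a small model,
# System 2 like an expensive reasoning model.
PRICE_PER_MTOK = {"system1": 0.60, "system2": 60.00}

class ShadowLedger:
    """Track actual spend vs. what routing everything to System 2 would cost."""

    def __init__(self):
        self.actual = 0.0
        self.counterfactual = 0.0

    def record(self, system, tokens):
        self.actual += tokens / 1_000_000 * PRICE_PER_MTOK[system]
        # Counterfactual: the same query answered by System 2.
        self.counterfactual += tokens / 1_000_000 * PRICE_PER_MTOK["system2"]

    def savings_pct(self):
        if self.counterfactual == 0:
            return 0.0
        return 100 * (1 - self.actual / self.counterfactual)
```

With these placeholder prices, every query routed to System 1 shows a 99% saving on the ledger, which is the comparison-mode figure the gateway reports.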

Section 05

Evidence: Real-World Case Analysis

Case 1: For the simple query "What payment methods do you accept?", System 1's response was concise and accurate (18 tokens, $0.000008), while System 2's was overly detailed and contained hallucinations (380 tokens, $0.0285) — a 99.9% saving. Case 2: A complex multi-step problem was correctly identified, routed to System 2, and produced a high-quality answer.
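The Case 1 saving can be checked directly from the two quoted costs:

```python
# Sanity-checking the Case 1 numbers quoted above.
system1_cost = 0.000008   # 18 tokens on the lightweight model
system2_cost = 0.0285     # 380 tokens on the reasoning model

savings = 1 - system1_cost / system2_cost
print(f"saved {savings:.2%}")   # prints "saved 99.97%", consistent with the quoted 99.9%
```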


Section 06

Technical Implementation Details

Built on LangChain and LangSmith, it supports both local model deployment (Ollama) and commercial API calls (OpenAI, etc.). Thanks to its modular design, components can be configured and replaced independently; the routing agent, evaluator, and model backends expose clear interfaces for easy customization.
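One way to picture that modularity is a shared backend interface that both local and commercial models satisfy. This is a hypothetical sketch of the idea — the names and the stubbed Ollama call below are illustrative, not fast-slow-llm's actual API:

```python
from typing import Protocol

class ChatBackend(Protocol):
    """Anything that can turn a prompt into a completion."""
    def complete(self, prompt: str) -> str: ...

class OllamaBackend:
    """Stand-in for a local Ollama model; stubbed so the sketch is
    self-contained (a real backend would call the Ollama server)."""

    def __init__(self, model: str = "llama3"):
        self.model = model

    def complete(self, prompt: str) -> str:
        return f"[{self.model}] response to: {prompt}"

def answer(backend: ChatBackend, prompt: str) -> str:
    # The router only depends on the Protocol, so System 1 and System 2
    # backends (local or commercial) can be swapped without touching it.
    return backend.complete(prompt)
```

Structural typing keeps the routing logic decoupled from any particular provider, which is what makes components independently replaceable.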


Section 07

Application Scenarios and Value

Suitable for customer service automation (cutting costs on simple queries), content moderation (quickly filtering routine cases and escalating edge cases to strong models), intelligent assistants (dynamically adjusting processing capability), and enterprise knowledge bases (optimizing resource allocation).


Section 08

Summary and Outlook

fast-slow-llm translates cognitive science insights into engineering solutions, providing ideas for LLM cost optimization. The core insight is to intelligently allocate resources instead of using the strongest model for all problems. In the future, such intelligent routing architectures may become standard configurations for LLM applications, balancing performance and cost.