Zing Forum

Adaptive LLM Routing System: Finding the Optimal Balance Between Cost and Accuracy

This project proposes an innovative collaborative architecture for large and small models. By intelligently routing queries using confidence signals, it reduces large model calls by approximately 70% while ensuring accuracy, providing a cost-effective Q&A solution for local deployment scenarios.

Tags: LLM routing · cost optimization · small language models · confidence estimation · Q&A systems · model collaboration · open source
Published 2026-04-13 19:41 · Recent activity 2026-04-13 19:50 · Estimated read: 4 min

Section 01

Adaptive LLM Routing System: Core Idea and Value Proposition

The adaptive LLM routing system is an innovative architecture that coordinates large language models (LLMs) and small language models (SLMs) to balance cost and accuracy. By using confidence signals to route queries intelligently, it reduces LLM calls by about 70% while maintaining competitive accuracy, providing an economical solution for local deployment scenarios.


Section 02

Background: Cost-Capability Contradiction Between LLM and SLM

LLMs like GPT-4 excel at complex reasoning but have high API costs and long latency. SLMs (1.5B-8B parameters) are low-cost and fast but lack reliability in complex tasks. Traditional approaches use LLMs for all queries, wasting resources on simple ones. The adaptive routing system addresses this by letting SLMs handle simple queries and upgrading to LLMs only when SLMs are uncertain.


Section 03

System Architecture and Confidence Signal Design

The system flow is: SLM → Confidence Signal → Routing Decision → (Return SLM Result / Upgrade to LLM). Three lightweight confidence signals are used:

  1. Answer length: Longer answers often indicate uncertainty.
  2. Token entropy: High entropy reflects model confusion in word choices.
  3. Log probability: Lower values mean lower confidence in the answer.

Together, these three signals form the basis for routing decisions.
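The signal computation and routing decision above can be sketched in a few lines of Python. The threshold values, function names, and input format (per-token log-probabilities plus per-token probability distributions, e.g. from top-k sampling output) are illustrative assumptions, not details taken from the project:

```python
import math

# Illustrative thresholds -- tuning values, not figures from the article.
MAX_ANSWER_TOKENS = 64    # longer answers often indicate uncertainty
MAX_MEAN_ENTROPY = 1.5    # nats; high entropy reflects confusion in word choices
MIN_MEAN_LOGPROB = -1.0   # lower values mean lower confidence

def mean_token_entropy(token_distributions):
    """Average entropy over per-token probability distributions."""
    entropies = [
        -sum(p * math.log(p) for p in dist if p > 0)
        for dist in token_distributions
    ]
    return sum(entropies) / len(entropies)

def should_escalate(answer_tokens, token_logprobs, token_distributions):
    """Return True if the SLM answer should be upgraded to the LLM."""
    too_long = len(answer_tokens) > MAX_ANSWER_TOKENS
    mean_logprob = sum(token_logprobs) / len(token_logprobs)
    low_confidence = mean_logprob < MIN_MEAN_LOGPROB
    confused = mean_token_entropy(token_distributions) > MAX_MEAN_ENTROPY
    return too_long or low_confidence or confused
```

A short, high-probability answer keeps the SLM result; a long or low-probability one triggers the upgrade path.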

Section 04

Experimental Results: Cost Saving and Performance Insights

Evaluated on the SQuAD, SQuAD v2, and HotpotQA datasets:

  • Cost saving: Reduces LLM calls by ~70%, enabling 3x more queries under the same budget.
  • Performance:
    • SQuAD: Near LLM baseline performance.
    • SQuAD v2: Strong advantage in handling unanswerable questions.
    • HotpotQA: Weak in multi-hop reasoning, showing limitations in complex tasks.
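A back-of-the-envelope calculation shows where the ~3x figure comes from, assuming (illustratively) that an SLM call costs about 2% of an LLM call:

```python
# Illustrative cost arithmetic; the relative prices are assumptions.
LLM_COST = 1.0         # relative cost of one LLM call
SLM_COST = 0.02        # relative cost of one SLM call (assumed near-negligible)
ESCALATION_RATE = 0.3  # ~70% of queries stay on the SLM

baseline_cost = LLM_COST                             # every query hits the LLM
routed_cost = SLM_COST + ESCALATION_RATE * LLM_COST  # SLM first, escalate ~30%

speedup = baseline_cost / routed_cost  # ~3.1: roughly 3x more queries per budget
```

The cheaper the SLM relative to the LLM, the closer the multiplier gets to 1 / escalation rate.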

Section 05

Limitations and Future Improvement Directions

Current limitations:

  1. Heuristic routing (hand-designed rules instead of learned models).
  2. Poor multi-hop reasoning performance.
  3. Confidence calibration issues for SLM and LLM.
  4. Limited to 1.5B-8B models.

Future work:

  • Learning-based routing (e.g., logistic regression).
  • Self-consistency evaluation.
  • Testing larger models (14B-70B).
  • Semantic caching and speculative decoding.
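The learning-based routing direction mentioned above could look like the following pure-Python sketch: logistic regression over the three confidence signals, replacing the hand-designed rules. The training loop, features, and hyperparameters are illustrative assumptions, not the project's implementation:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_router(features, labels, lr=0.1, epochs=500):
    """Fit logistic regression with SGD; label 1 means 'escalate to LLM'.
    Each feature vector holds the (normalized) confidence signals, e.g.
    [answer_length, mean_token_entropy, -mean_logprob]."""
    w = [0.0] * len(features[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def route(x, w, b, threshold=0.5):
    """Return 'llm' if the predicted escalation probability exceeds threshold."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return "llm" if p > threshold else "slm"
```

The routing threshold could then be tuned on a held-out set to trade accuracy against LLM call rate, rather than fixing per-signal cutoffs by hand.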

Section 06

Industry Implications and Project Value

This system represents the trend of layered reasoning, applicable to:

  • Edge computing: SLM on edge devices, complex queries to cloud.
  • Enterprise apps: Local SLM for sensitive data, cloud LLM for general queries.
  • Multimodal systems: Light models for simple tasks, large models for complex ones.

As an open-source project, it provides reusable frameworks, confidence signal implementations, experimental data, and improvement paths for developers aiming to reduce LLM costs.