Zing Forum


Adaptive LLM Routing System: Finding the Optimal Balance Between Cost and Accuracy

This project proposes an innovative collaborative architecture of large and small models that routes queries intelligently via confidence signals, cutting large-model calls by about 70% while preserving accuracy, and offering a cost-effective question-answering solution for local deployment scenarios.

Tags: LLM routing · cost optimization · small language models · confidence estimation · QA systems · model collaboration · open-source project
Published 2026/04/13 19:41 · Last activity 2026/04/13 19:50 · Estimated reading time: 4 minutes

Section 01

Adaptive LLM Routing System: Core Idea and Value Proposition

The adaptive LLM routing system is an innovative architecture that coordinates large language models (LLMs) and small language models (SLMs) to balance cost and accuracy. By using confidence signals to intelligently route queries, it reduces LLM calls by about 70% while maintaining competitive accuracy, providing an economical solution for local deployment scenarios.


Section 02

Background: Cost-Capability Contradiction Between LLM and SLM

LLMs like GPT-4 excel at complex reasoning but have high API costs and long latency. SLMs (1.5B-8B parameters) are low-cost and fast but lack reliability in complex tasks. Traditional approaches use LLMs for all queries, wasting resources on simple ones. The adaptive routing system addresses this by letting SLMs handle simple queries and upgrading to LLMs only when SLMs are uncertain.


Section 03

System Architecture and Confidence Signal Design

The system flow is: SLM → Confidence Signal → Routing Decision → (Return SLM Result / Upgrade to LLM). Three lightweight confidence signals are used:

  1. Answer length: Longer answers often indicate uncertainty.
  2. Token entropy: High entropy reflects model confusion in word choices.
  3. Log probability: Lower values mean lower confidence in the answer.

These signals form the basis for routing decisions.
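The three signals above can be computed directly from the SLM's per-token output distributions. The sketch below is illustrative only: the function names and threshold values are assumptions, not taken from the project, and a real deployment would tune the thresholds on held-out data.

```python
import math

# Hypothetical thresholds -- the post does not give the actual values,
# so these are illustrative assumptions.
MAX_ANSWER_TOKENS = 64      # longer answers are treated as uncertain
MAX_TOKEN_ENTROPY = 2.5     # nats; high entropy = confused word choices
MIN_AVG_LOGPROB = -1.0      # low average log-prob = low confidence

def token_entropy(prob_dists):
    """Mean Shannon entropy (nats) over the per-step token distributions."""
    entropies = [-sum(p * math.log(p) for p in dist if p > 0)
                 for dist in prob_dists]
    return sum(entropies) / len(entropies)

def avg_logprob(token_logprobs):
    """Average log-probability of the tokens the SLM actually emitted."""
    return sum(token_logprobs) / len(token_logprobs)

def should_escalate(answer_tokens, prob_dists, token_logprobs):
    """Heuristic routing rule: escalate to the LLM if any signal
    crosses its threshold; otherwise return the SLM answer."""
    return (
        len(answer_tokens) > MAX_ANSWER_TOKENS
        or token_entropy(prob_dists) > MAX_TOKEN_ENTROPY
        or avg_logprob(token_logprobs) < MIN_AVG_LOGPROB
    )

# A confident short answer: peaked distributions, high log-probs.
confident = should_escalate(
    answer_tokens=["Paris"],
    prob_dists=[[0.97, 0.02, 0.01]],
    token_logprobs=[-0.03],
)
print(confident)  # False -> return the SLM result

# An uncertain answer: flat distributions, low log-probs.
uncertain = should_escalate(
    answer_tokens=["It", "might", "be", "Paris"],
    prob_dists=[[0.25, 0.25, 0.25, 0.25]] * 4,
    token_logprobs=[-1.4, -1.6, -1.3, -1.5],
)
print(uncertain)  # True -> upgrade to the LLM
```

In practice the per-step distributions and log-probs would come from the SLM's decoding loop (e.g. the scores returned during generation), not hand-written lists as here.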

Section 04

Experimental Results: Cost Saving and Performance Insights

Evaluated on the SQuAD, SQuAD v2, and HotpotQA datasets:

  • Cost saving: Reduces LLM calls by ~70%, enabling 3x more queries under the same budget.
  • Performance:
    • SQuAD: Near LLM baseline performance.
    • SQuAD v2: Strong advantage in handling unanswerable questions.
    • HotpotQA: Weak in multi-hop reasoning, showing limitations in complex tasks.

Section 05

Limitations and Future Improvement Directions

Current limitations:

  1. Heuristic routing (hand-designed rules instead of learned models).
  2. Poor multi-hop reasoning performance.
  3. Confidence calibration issues for SLM and LLM.
  4. Limited to 1.5B-8B models.

Future work:
  • Learning-based routing (e.g., logistic regression).
  • Self-consistency evaluation.
  • Testing larger models (14B-70B).
  • Semantic caching and speculative decoding.
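The learning-based routing direction mentioned above could replace the hand-designed thresholds with a classifier trained on the same three signals. The sketch below fits a tiny logistic-regression router with plain gradient descent; the training data is made up for illustration, and a real router would be trained on logged (signals, was-the-SLM-wrong) pairs with properly normalized features.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_router(features, labels, lr=0.5, epochs=2000):
    """Fit a tiny logistic-regression router with batch gradient descent.
    features: rows of [answer_length, token_entropy, avg_logprob]
              (assumed pre-scaled to comparable ranges)
    labels:   1 if the SLM answer was wrong (should escalate), else 0."""
    n_feats = len(features[0])
    w = [0.0] * n_feats
    b = 0.0
    for _ in range(epochs):
        grad_w = [0.0] * n_feats
        grad_b = 0.0
        for x, y in zip(features, labels):
            err = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
            for j in range(n_feats):
                grad_w[j] += err * x[j]
            grad_b += err
        w = [wi - lr * gi / len(features) for wi, gi in zip(w, grad_w)]
        b -= lr * grad_b / len(features)
    return w, b

def route(x, w, b, threshold=0.5):
    """Return True (escalate to LLM) if predicted error prob > threshold."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > threshold

# Toy training set (made up for illustration): short, confident answers
# were correct; long, high-entropy, low-logprob answers were wrong.
X = [
    [0.1, 0.2, -0.1], [0.2, 0.3, -0.2], [0.1, 0.1, -0.3],  # SLM correct
    [0.9, 2.1, -1.8], [0.8, 1.9, -1.5], [1.0, 2.4, -2.0],  # SLM wrong
]
y = [0, 0, 0, 1, 1, 1]
w, b = train_router(X, y)

print(route([0.15, 0.2, -0.2], w, b))  # confident query: no escalation
print(route([0.95, 2.2, -1.9], w, b))  # uncertain query: escalate
```

Compared with fixed thresholds, a learned router can also expose a tunable operating point: raising the decision threshold trades accuracy for fewer LLM calls, which fits the cost/accuracy framing of the project.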

Section 06

Industry Implications and Project Value

This system represents the trend of layered reasoning, applicable to:

  • Edge computing: SLM on edge devices, complex queries to cloud.
  • Enterprise apps: Local SLM for sensitive data, cloud LLM for general queries.
  • Multimodal systems: Light models for simple tasks, large models for complex ones.

As an open-source project, it provides reusable frameworks, confidence signal implementations, experimental data, and improvement paths for developers aiming to reduce LLM costs.