
llm-d Router: An Intelligent Traffic Scheduling and Routing System for Large Model Inference

An in-depth analysis of the llm-d Router project, an intelligent routing system designed specifically for large-scale LLM inference services, supporting KV cache-aware routing, request priority management, and a decoupled Prefill/Decode architecture.

llm-d · LLM Inference · Kubernetes · Gateway API · KV Cache · Decoupled Inference · Intelligent Routing · Envoy · Large Model Deployment
Published 2026-05-15 00:12 · Recent activity 2026-05-15 00:19 · Estimated read 7 min

Section 01

Main Floor | llm-d Router: Guide to the Intelligent Routing System for Large Model Inference

Main Floor Guide

llm-d Router is an implementation of the Gateway API Inference Extension (GIE) for the Kubernetes ecosystem: an intelligent routing system designed specifically for large-scale LLM inference services. Its core value lies in optimizing request scheduling by exploiting how LLM inference actually behaves (KV cache reuse, and the differing characteristics of the Prefill and Decode phases). It supports KV cache-aware routing, request priority management, and a decoupled inference architecture, acting as the "intelligent brain" of an inference service.


Section 02

Background | Core Challenges in LLM Inference Scheduling

Challenges in LLM Inference Scheduling

With the widespread deployment of LLMs in production environments, the performance and efficiency of inference services have become key issues. Traditional load balancers cannot fully leverage the unique characteristics of LLM inference (such as KV cache reuse and differences in computational characteristics between Prefill and Decode phases), leading to resource waste and performance bottlenecks. llm-d Router was created to address these problems.


Section 03

Core Architecture and Methods

Core Architecture Components

  1. Endpoint Picker (EPP): An intelligent routing engine that selects the optimal Pod by evaluating the state of the InferencePool (KV cache locality, load, request priority). It supports two deployment modes: Standalone (self-managed Envoy + EPP) and Gateway (Kubernetes Gateway API integration).
  2. Request Management API: Includes InferenceObjective (configures scheduling goals such as request priority) and InferenceModelRewrite (supports A/B testing and canary releases; see the sketch after this list).
  3. Decoupled Inference Sidecar: Coordinates multi-stage inference lifecycles (e.g., P/D, E/P/D) and manages the transfer of KV caches and embedding vectors.
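
To make the request-management idea concrete, here is a minimal, self-contained Go sketch of how an InferenceModelRewrite-style rule could drive a canary release by rewriting a fraction of traffic for one model name to a new variant. The type and field names (ModelRewrite, MatchModel, Target, Weight) and the model names are illustrative assumptions, not the actual CRD schema.

```go
package main

import (
	"fmt"
	"math/rand"
)

// ModelRewrite is a hypothetical stand-in for an InferenceModelRewrite rule
// (field names are illustrative, not the real schema).
type ModelRewrite struct {
	MatchModel string  // model name the client requested
	Target     string  // backend variant to serve instead
	Weight     float64 // fraction of matching traffic to rewrite (0..1)
}

// rewriteModel applies the first matching rule probabilistically and returns
// the model name the request should actually be routed to.
func rewriteModel(requested string, rules []ModelRewrite) string {
	for _, r := range rules {
		if r.MatchModel == requested && rand.Float64() < r.Weight {
			return r.Target
		}
	}
	return requested
}

func main() {
	// Send roughly 10% of llama-3-70b traffic to a canary variant.
	rules := []ModelRewrite{
		{MatchModel: "llama-3-70b", Target: "llama-3-70b-canary", Weight: 0.1},
	}
	counts := map[string]int{}
	for i := 0; i < 1000; i++ {
		counts[rewriteModel("llama-3-70b", rules)]++
	}
	fmt.Println(counts) // roughly 900 stable / 100 canary
}
```

The same weighted-rewrite pattern covers A/B testing: two rules with complementary weights split traffic between two variants of the same public model name.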

Plug-in Architecture

The EPP uses a filter-scorer-fetcher plug-in architecture (a minimal sketch follows the list):

  • Filter: Excludes ineligible Pods (model compatibility, resource usage, etc.);
  • Scorer: Performs weighted scoring on the filtered Pods (based on KV cache reuse, load, session affinity);
  • Fetcher: Collects metric data and injects it into shared storage for use by the scorers.
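
To illustrate the pipeline above, here is a small, self-contained Go sketch of the filter → scorer flow: filters drop ineligible Pods, scorers contribute to a weighted sum, and the highest-scoring Pod wins. The Pod fields, filter predicates, and score weights are illustrative assumptions, not the llm-d plug-in interfaces.

```go
package main

import "fmt"

// Pod is an illustrative snapshot of one inference backend, assumed to be
// populated by fetchers from scraped metrics.
type Pod struct {
	Name         string
	Model        string
	QueueDepth   int     // pending requests
	PrefixBlocks int     // cached prefix blocks matching the current request
	KVCacheUsage float64 // fraction of KV cache memory in use (0..1)
}

// Filter excludes ineligible Pods; Scorer ranks the survivors.
type Filter func(Pod) bool
type Scorer func(Pod) float64

// pick runs every filter, sums every scorer, and returns the best Pod.
func pick(pods []Pod, filters []Filter, scorers []Scorer) (Pod, bool) {
	var best Pod
	var bestScore float64
	found := false
candidates:
	for _, p := range pods {
		for _, f := range filters {
			if !f(p) {
				continue candidates
			}
		}
		score := 0.0
		for _, s := range scorers {
			score += s(p)
		}
		if !found || score > bestScore {
			best, bestScore, found = p, score, true
		}
	}
	return best, found
}

func main() {
	pods := []Pod{
		{Name: "vllm-0", Model: "llama-3-8b", QueueDepth: 6, PrefixBlocks: 0, KVCacheUsage: 0.9},
		{Name: "vllm-1", Model: "llama-3-8b", QueueDepth: 2, PrefixBlocks: 5, KVCacheUsage: 0.4},
	}
	filters := []Filter{
		func(p Pod) bool { return p.Model == "llama-3-8b" }, // model compatibility
		func(p Pod) bool { return p.KVCacheUsage < 0.95 },   // resource headroom
	}
	scorers := []Scorer{
		func(p Pod) float64 { return 2.0 * float64(p.PrefixBlocks) }, // cache reuse
		func(p Pod) float64 { return -0.5 * float64(p.QueueDepth) },  // load
	}
	if p, ok := pick(pods, filters, scorers); ok {
		fmt.Println("routing to", p.Name)
	}
}
```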

Section 04

Key Mechanisms: Cache and Decoupled Inference

KV Cache-Aware Routing

Implements a precise prefix-cache scoring mechanism. By measuring how much of a request's prompt prefix is already present in each Pod's KV cache, the router prioritizes the Pod with the longest prefix match, reducing redundant computation. The behavior is tunable via a configurable block size (blockSize) and a cap on the number of prefix blocks to match (maxPrefixBlocksToMatch).
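
The following Go sketch shows the idea of prefix-block matching under the two knobs mentioned above: prompts are split into fixed-size token blocks, block hashes are chained so that a hit implies the whole preceding prefix is cached, and matching stops at the first miss or at maxPrefixBlocksToMatch. The hashing scheme, constant values, and data structures are illustrative assumptions, not the actual implementation.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

const (
	blockSize              = 16 // tokens per block (assumed value)
	maxPrefixBlocksToMatch = 4  // cap on blocks considered per request (assumed value)
)

// hashBlock keys one block of tokens, chained with the previous block's hash
// so a match implies the entire preceding prefix also matches.
func hashBlock(prev uint64, block []int) uint64 {
	h := fnv.New64a()
	fmt.Fprintf(h, "%d:%v", prev, block)
	return h.Sum64()
}

// prefixScore counts how many leading blocks of the request are already cached
// on a Pod (represented here as the set of block hashes it holds).
func prefixScore(tokens []int, podBlocks map[uint64]bool) int {
	matched, prev := 0, uint64(0)
	for i := 0; i+blockSize <= len(tokens) && matched < maxPrefixBlocksToMatch; i += blockSize {
		key := hashBlock(prev, tokens[i:i+blockSize])
		if !podBlocks[key] {
			break // prefix reuse ends at the first miss
		}
		matched++
		prev = key
	}
	return matched
}

func main() {
	prompt := make([]int, 64) // a 64-token prompt (all zeros, for illustration)

	// Pretend this Pod already holds the first two blocks of that prompt.
	cached := map[uint64]bool{}
	prev := uint64(0)
	for i := 0; i < 2*blockSize; i += blockSize {
		prev = hashBlock(prev, prompt[i:i+blockSize])
		cached[prev] = true
	}
	fmt.Println("matched prefix blocks:", prefixScore(prompt, cached)) // 2
}
```

A scorer plug-in would then weight this block count against load and cache pressure, as in the pipeline sketch earlier.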

Decoupled Inference Support

  1. Prefill/Decode Decoupling (P/D): Separates prompt processing (Prefill) and token generation (Decode) onto different Pods, exploiting the differing computational characteristics of the two phases to optimize resource use (see the sketch after this list).
  2. Experimental E/P/D Decoupling: Supports multimodal inference. The Encode Pod processes multimodal inputs (e.g., images), the Prefill Pod handles prompts, and the Decode Pod generates output, with queue and memory management coordinated by the vLLM Sidecar.
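
As a rough illustration of the P/D flow, the Go sketch below walks one request through a compute-bound prefill on one Pod and a memory-bandwidth-bound decode on another, with a hand-off object standing in for the KV-cache transfer. The function names and the KVHandle type are hypothetical; in real deployments the KV cache is handed off via the decoupled inference sidecar described earlier.

```go
package main

import "fmt"

// KVHandle is a hypothetical reference to the KV-cache blocks produced by
// prefill and transferred to the decode Pod.
type KVHandle struct {
	PodName string
	Blocks  int
}

// prefill models the compute-bound phase: the whole prompt is processed in
// parallel and its KV cache is materialized.
func prefill(pod, prompt string) KVHandle {
	fmt.Printf("[%s] prefill %d chars\n", pod, len(prompt))
	return KVHandle{PodName: pod, Blocks: len(prompt)/16 + 1}
}

// decode models the memory-bandwidth-bound phase: tokens are generated one at
// a time against the transferred KV cache.
func decode(pod string, kv KVHandle, maxTokens int) string {
	fmt.Printf("[%s] decode up to %d tokens using %d KV blocks from %s\n",
		pod, maxTokens, kv.Blocks, kv.PodName)
	return "<generated text>"
}

func main() {
	kv := prefill("prefill-pod-0", "Explain KV-cache-aware routing in one paragraph.")
	fmt.Println(decode("decode-pod-3", kv, 256))
}
```

Because the two phases stress different resources, the router can place prefill on compute-rich Pods and decode on memory-bandwidth-rich Pods, which is the resource benefit item 1 describes.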

Section 05

Application Value and Scenarios

Practical Application Value

  • Improve Cache Hit Rate: Reduce redundant computation and lower inference costs;
  • Optimize Resource Utilization: The decoupled architecture lets Prefill and Decode each run on the hardware best suited to that phase;
  • Flexible Traffic Management: Supports A/B testing, canary releases, and priority scheduling;
  • Multi-Cloud Compatibility: Works with self-managed proxies (Istio, AgentGateway) and cloud-hosted services (Google Cloud ALB).

Section 06

Community Participation and Future Outlook

Community Participation

llm-d Router is an active open-source project. It holds biweekly community meetings (Wednesdays at 10 AM PDT) and communicates via the #sig-router Slack channel. Contributors are welcome; for major changes, please open an Issue for discussion first.

Future Outlook

llm-d Router represents the evolutionary direction of LLM inference infrastructure, upgrading traditional stateless load balancing to inference-aware scheduling. As multimodal models and longer context windows become widespread, decoupled inference and intelligent routing will become standard. Its plug-in architecture and deep integration with the K8s ecosystem make it a strong candidate for building next-generation AI infrastructure.