Zing Forum

llm-d Inference Scheduler: An Intelligent Routing System for Large Model Inference Requests Based on Kubernetes Gateway API

This article introduces the llm-d Inference Scheduler, a Kubernetes-native scheduling system designed specifically for large language model (LLM) inference. Built on the Gateway API Inference Extension (GIE), the system implements intelligent routing decisions through pluggable filters, scorers, and fetchers, supporting advanced features such as multi-model deployment, KV cache locality optimization, and Prefill/Decode separation.

Kubernetes · Gateway API · Large Model Inference · Intelligent Routing · KV Cache · Prefill/Decode Separation · Envoy · Load Balancing · Multi-Model Deployment · Cloud Native
Published 2026-04-03 00:12 · Recent activity 2026-04-03 00:23 · Estimated read 6 min

Section 01

[Introduction] llm-d Inference Scheduler: A Cloud-Native Intelligent Routing System for LLM Inference

The llm-d Inference Scheduler is an intelligent routing system for large model inference requests built on the Kubernetes Gateway API. It addresses routing challenges in large-scale LLM inference deployments, where traditional load balancing cannot exploit features such as KV cache reuse or Prefill/Decode separation, by making intelligent routing decisions through pluggable filters, scorers, and fetchers. It supports advanced features like multi-model deployment, KV cache locality optimization, and Prefill/Decode separation, providing enterprise-grade scheduling capabilities for production-level LLM inference services.


Section 02

Project Background and Core Objectives

In production deployments of large-scale LLM inference services, traditional load balancing strategies struggle to fully utilize LLM inference features (e.g., KV cache reuse, Prefill/Decode phase separation, heterogeneous hardware support). The llm-d Inference Scheduler is built on the Kubernetes Gateway API Inference Extension (GIE), with the following core objectives:

  • Intelligent routing: multi-dimensional scheduling to the optimal Pod.
  • Multi-model support: parallel deployment of multiple models in the same cluster.
  • Heterogeneous hardware adaptation: deploying different models on different hardware.
  • Runtime extensibility: pluggable components for custom scheduling logic.
  • Community alignment: integration with the GIE and Envoy ecosystems.
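Multi-model support is expressed through GIE's custom resources: an InferencePool groups the Pods serving a set of model servers and names the endpoint picker for them, while an InferenceModel maps a requested model name onto a pool. The sketch below is illustrative only; the resource and pool names are made up, and field names follow the v1alpha2 API, so check the current GIE documentation before use:

```yaml
# InferencePool: the Pods behind one set of model servers,
# plus a reference to the EPP that picks endpoints for them.
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: llama-pool              # hypothetical name
spec:
  targetPortNumber: 8000
  selector:
    app: vllm-llama             # matches the model-server Pods
  extensionRef:
    name: llama-epp             # the EPP Service for this pool
---
# InferenceModel: maps a client-requested model name onto a pool,
# with a criticality used for prioritization.
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceModel
metadata:
  name: llama-chat              # hypothetical name
spec:
  modelName: meta-llama/Llama-3.1-8B-Instruct
  criticality: Critical
  poolRef:
    name: llama-pool
```

Deploying a second model on different hardware is then a matter of adding another InferencePool (with its own selector and EPP) and pointing a new InferenceModel at it.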


Section 03

Architecture Design: Envoy+EPP Two-Layer Architecture

llm-d adopts a data plane/control plane separation architecture:

  • Data Plane: Envoy gateway, which communicates with the control plane via the External Processing (ext-proc) mechanism to execute routing decisions without affecting data path performance.
  • Control Plane: EPP (Endpoint Picker), an extended implementation of GIE's Endpoint Picker that supports Prefill/Decode separation and intervenes in the request flow via ext-proc to select the optimal Pod.
  • Optional Component: BBR (Body Based Routing), which identifies the target model based on the request body and determines the InferencePool to route to.
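Conceptually, the EPP's role in this two-layer flow is narrow: inspect the request, pick a Pod, and hand the choice back to Envoy as a header mutation that the gateway's routing then honors. A minimal Python sketch of that contract follows; the destination header matches GIE's convention, but the `Pod` type and picker logic are invented for illustration:

```python
from dataclasses import dataclass

# Header GIE's EPP uses to tell Envoy which endpoint to route to
# (delivered as an ext-proc header mutation on the request).
DESTINATION_HEADER = "x-gateway-destination-endpoint"

@dataclass
class Pod:
    address: str   # "ip:port" of a model-server Pod
    healthy: bool

def pick_endpoint(pods: list[Pod]) -> dict[str, str]:
    """Stand-in for the EPP decision: filter, choose, emit a header mutation.

    A real EPP receives the request on Envoy's ext-proc gRPC stream and
    replies with a ProcessingResponse carrying a mutation like this one.
    """
    candidates = [p for p in pods if p.healthy]
    if not candidates:
        raise RuntimeError("no healthy endpoints in the InferencePool")
    chosen = candidates[0]  # a real scheduler would score the candidates first
    return {DESTINATION_HEADER: chosen.address}

pods = [Pod("10.0.0.1:8000", healthy=False), Pod("10.0.0.2:8000", healthy=True)]
print(pick_endpoint(pods))  # {'x-gateway-destination-endpoint': '10.0.0.2:8000'}
```

Because the decision travels as metadata rather than proxied traffic, the request body itself never passes through the EPP, which is what keeps the control plane off the data path.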

Section 04

Core Scheduling Mechanism: Filters, Scorers, and Fetchers

Routing decisions are collaboratively made by three types of pluggable components:

  • Filters: Exclude ineligible Pods, such as those failing model compatibility, resource usage, health status, or custom logic checks.
  • Scorers: Score candidate Pods, combining scores by weight (e.g., KV cache locality, session affinity, load balancing, model metadata scoring).
  • Fetchers: Collect Pod metadata and runtime metrics, maintaining a shared data store for scorers to query.
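The three component types compose into a simple pipeline: filters prune the Pod set, scorers produce per-Pod scores that are combined by weight, and the highest-scoring survivor wins. A hedged Python sketch of that composition (the filter/scorer names, weights, and `PodInfo` fields are illustrative stand-ins, not llm-d's actual plugin set):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PodInfo:
    name: str
    healthy: bool
    kv_cache_hit: float   # 0..1, fraction of the prompt already cached (from a fetcher)
    queue_depth: int      # pending requests (from a fetcher)

Filter = Callable[[PodInfo], bool]
Scorer = Callable[[PodInfo], float]  # returns a score in [0, 1]

def schedule(pods, filters, weighted_scorers):
    """Drop Pods that fail any filter, then pick the best weighted score."""
    candidates = [p for p in pods if all(f(p) for f in filters)]
    if not candidates:
        return None
    def total(p):
        return sum(w * s(p) for w, s in weighted_scorers)
    return max(candidates, key=total)

# Illustrative plugins (not llm-d's real ones):
healthy_filter: Filter = lambda p: p.healthy
cache_scorer: Scorer = lambda p: p.kv_cache_hit          # KV cache locality
load_scorer: Scorer = lambda p: 1.0 / (1 + p.queue_depth)  # load balancing

pods = [
    PodInfo("a", True, kv_cache_hit=0.9, queue_depth=8),
    PodInfo("b", True, kv_cache_hit=0.1, queue_depth=0),
    PodInfo("c", False, kv_cache_hit=1.0, queue_depth=0),  # filtered: unhealthy
]
best = schedule(pods, [healthy_filter], [(2.0, cache_scorer), (1.0, load_scorer)])
print(best.name)  # "a": the cache-locality weight outweighs its deeper queue
```

Note the division of labor: fetchers populate the metrics in `PodInfo` asynchronously, so filters and scorers stay cheap, pure functions over already-collected state.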

Section 05

Advanced Features and Extensibility

  • Prefill/Decode Separation: Schedules the Prefill (high computation, high parallelism) and Decode (KV cache-dependent, latency-sensitive) phases of LLM inference to specially optimized Pods, with experimental support for E/P/D three-phase separation.
  • Pluggable Architecture: Supports plugin intervention via lifecycle hooks (Pre-call, Scoring, Post-choice, After-response); a configuration-driven plugin system (parameterized via YAML configuration) allows scheduling strategies to be adjusted without downtime.
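The configuration-driven side can be pictured as a YAML document that enables plugins and weights them per scheduling profile. The fragment below is a sketch following the general shape of GIE's EndpointPickerConfig; the plugin type names and weights here are illustrative assumptions and should be checked against the current llm-d/GIE documentation:

```yaml
apiVersion: inference.networking.x-k8s.io/v1alpha1
kind: EndpointPickerConfig
plugins:
- type: prefix-cache-scorer     # KV cache locality (name illustrative)
- type: queue-scorer            # load balancing (name illustrative)
- type: max-score-picker
schedulingProfiles:
- name: default
  plugins:
  - pluginRef: prefix-cache-scorer
    weight: 2                   # favor cache hits over raw load
  - pluginRef: queue-scorer
    weight: 1
  - pluginRef: max-score-picker
```

Because the strategy lives in configuration rather than code, re-weighting scorers or swapping a picker is a config rollout, not a rebuild of the EPP binary.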

Section 06

Application Scenarios and Value

llm-d is suitable for scenarios such as multi-model services, high-throughput inference, heterogeneous hardware deployment, cost optimization, and production-level reliability. By combining the unique needs of LLM inference with the standard capabilities of the Kubernetes Gateway API, it provides a powerful, scalable, cloud-native scheduling solution for production-level LLM services.