xLLM: JD's Open-Source High-Performance Large Model Inference Engine and Domestic AI Chip Optimization Practice

xLLM is a high-performance LLM inference framework open-sourced by JD, specifically optimized for domestic AI accelerators. Through core technologies such as service-engine decoupling architecture, full-graph pipeline execution, dynamic shape graph optimization, and global KV Cache management, it enables enterprise-level high-throughput, low-latency distributed inference services.

Tags: LLM Inference · AI Accelerator · Domestic Chips · JD · High-Performance Computing · KV Cache · Speculative Decoding · MoE · DeepSeek · Qwen
Published 2026-03-31 16:15 · Last activity 2026-03-31 16:33 · Estimated read: 8 min

Section 01

xLLM: Guide to JD's Open-Source High-Performance Large Model Inference Engine and Domestic AI Chip Optimization Practice

xLLM is a high-performance LLM inference framework open-sourced by JD, deeply optimized for domestic AI accelerators. Through core technologies like service-engine decoupling architecture, full-graph pipeline execution, dynamic shape graph optimization, and global KV Cache management, it delivers enterprise-level high-throughput, low-latency distributed inference services. This framework has been widely deployed in JD's core retail businesses (intelligent customer service, risk control, supply chain optimization, advertising recommendation, etc.) and is a production-proven solution.


Section 02

Project Background

As large language models see widespread adoption in core enterprise businesses, inference performance and cost have become key challenges. In the domestic AI computing ecosystem in particular, fully exploiting the performance of domestic AI accelerators to achieve efficient, low-cost model deployment is a practical problem many enterprises face. xLLM is the open-source framework JD launched to address exactly this need, and it has been deployed and validated in JD's core business scenarios.


Section 03

Core Architecture and Technical Approaches

Service-Engine Decoupling Architecture

  • Service Layer: Elastic scheduling of online/offline requests, dynamic PD separation (separate optimization of the Prefill and Decode phases), and a hybrid EPD mechanism (for multimodal workloads and high-availability requirements)
  • Engine Layer: Multi-stream parallel computing, graph fusion optimization, speculative decoding (accelerating generation with small-model drafts), MoE dynamic load balancing, and global KV Cache management (hierarchical cache offloading and prefetching built on Mooncake)
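To make the PD-separation idea above concrete, here is a minimal scheduling sketch. It is illustrative only: the class and field names are hypothetical, not xLLM's actual API. The point is that prefill (compute-bound, one large pass over the prompt) and decode (memory-bandwidth-bound, one token per step) run on separate worker pools, with the KV cache handed over in between.

```python
# Hypothetical sketch of prefill/decode (PD) separation.
# All names here are illustrative, not xLLM's actual API.
from dataclasses import dataclass
from collections import deque

@dataclass
class Request:
    req_id: int
    prompt_tokens: int
    generated: int = 0          # tokens decoded so far
    kv_transferred: bool = False

class PDScheduler:
    """Routes requests between a prefill pool and a decode pool.

    Prefill is compute-bound (one big batch pass over the prompt);
    decode is memory-bandwidth-bound (one token per step), so running
    them on separate worker groups avoids interference.
    """
    def __init__(self):
        self.prefill_queue = deque()
        self.decode_queue = deque()

    def submit(self, req: Request):
        self.prefill_queue.append(req)

    def step(self):
        # Prefill phase: consume one prompt, then hand its KV cache
        # over to the decode workers.
        if self.prefill_queue:
            req = self.prefill_queue.popleft()
            req.kv_transferred = True   # KV cache migrated to decode pool
            self.decode_queue.append(req)
        # Decode phase: every active request emits one token per step.
        for req in self.decode_queue:
            req.generated += 1

sched = PDScheduler()
req1 = Request(req_id=1, prompt_tokens=128)
sched.submit(req1)
sched.step()   # prefill req1, hand off KV cache, decode 1st token
sched.step()   # decode 2nd token
```

A real engine would batch prefills, stream KV blocks between device memories, and rebalance pools dynamically; this sketch only shows the handover structure.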

Key Technical Features

  • Full-Graph Pipeline Execution: Three-layer asynchronous parallelism across request scheduling, the model graph, and operator kernels to maximize resource utilization
  • Dynamic Shape Optimization: Parameterized shape adaptation, a controlled tensor memory pool, and integration of custom operators such as PagedAttention
  • Efficient Memory Management: Mapping of discrete physical memory to contiguous virtual memory, on-demand allocation, and intelligent page scheduling
  • Algorithm Acceleration: Multi-core parallel speculative decoding and MoE expert dynamic load balancing
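The memory-management bullet above, mapping discrete physical blocks into a logically contiguous sequence, can be sketched as simple block-table bookkeeping in the style of PagedAttention. This is a minimal sketch under assumed names (`BlockAllocator`, `append_tokens`), not xLLM's actual implementation.

```python
# Minimal sketch of paged KV-cache allocation: discrete physical blocks
# are mapped into each request's logically contiguous token sequence,
# allocated on demand. Names are illustrative, not xLLM's API.

class BlockAllocator:
    def __init__(self, num_blocks: int, block_size: int = 16):
        self.block_size = block_size
        self.free_blocks = list(range(num_blocks))
        self.block_tables = {}   # req_id -> list of physical block ids
        self.token_counts = {}   # req_id -> tokens stored so far

    def append_tokens(self, req_id: int, n_tokens: int):
        """Grow a request's block table on demand as new tokens arrive."""
        table = self.block_tables.setdefault(req_id, [])
        count = self.token_counts.get(req_id, 0) + n_tokens
        needed = -(-count // self.block_size)   # ceil division
        while len(table) < needed:
            if not self.free_blocks:
                # A real engine would evict or offload (e.g. to host memory)
                raise MemoryError("KV cache exhausted")
            table.append(self.free_blocks.pop())
        self.token_counts[req_id] = count

    def free(self, req_id: int):
        """Return all of a finished request's blocks to the free pool."""
        self.free_blocks.extend(self.block_tables.pop(req_id, []))
        self.token_counts.pop(req_id, None)

alloc = BlockAllocator(num_blocks=8, block_size=16)
alloc.append_tokens(req_id=1, n_tokens=40)   # 40 tokens -> 3 blocks
alloc.append_tokens(req_id=1, n_tokens=10)   # 50 tokens -> 4 blocks
```

Because blocks need not be physically adjacent, memory is allocated only as sequences actually grow, which is what enables the on-demand allocation and page scheduling described above.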

Section 04

Hardware and Model Ecosystem Support

Hardware Support Matrix

Hardware Type | Example Models   | Notes
NPU           | A2, A3           | HDK Driver 25.2.0+
MLU           | Cambricon Series | -
ILU           | BI150            | -
MUSA          | S5000            | Muxi GPU

International mainstream hardware such as NVIDIA GPUs is also supported, providing a unified cross-platform experience.

Model Support

Day-0 support for mainstream large models including DeepSeek-V3.1, Qwen2/3 series, GLM-4.5/4.6/4.6V/4.7/5 series, VLM-R1, etc. Enterprises can flexibly choose models that fit their business needs.


Section 05

Enterprise-Level Deployment Verification

xLLM has been widely deployed in JD's core businesses, demonstrating:

  • High concurrency processing capability
  • 99.9%+ service availability
  • Millisecond-level response latency
  • Elastic scaling capability

Covered scenarios include intelligent customer service (complex dialogue and multi-turn interaction), risk control systems (real-time risk identification), supply chain optimization (demand forecasting and inventory management), and advertising recommendation (personalized content generation and ranking). The project team has published a technical report on arXiv, detailing the architecture design and implementation details.


Section 06

Open-Source Ecosystem and Collaboration

xLLM's development has benefited from several open-source projects: ScaleLLM (reference for graph construction and runtime), Mooncake (foundation for KV cache management), brpc (high-performance HTTP service), tokenizers-cpp (C++ tokenizer), and safetensors (secure loading of model weights).

It also collaborates with laboratory teams from universities such as Tsinghua University, University of Science and Technology of China, Beihang University, Peking University, and Tianjin University to promote industry-university-research integration.


Section 07

Summary and Outlook

xLLM represents an important advancement in inference frameworks for domestic AI accelerators, giving enterprises a high-performance, low-cost LLM deployment option built on the techniques above. For enterprises evaluating domestic AI computing power, it is a production-proven choice; for developers focused on inference optimization, its technical details reward close study. As the domestic AI chip ecosystem matures, frameworks like xLLM will play an increasingly important role in bringing AI applications into production.