
LMCache: A Data Center-Scale KV Cache Layer That Reduces LLM Inference Latency by 3-10x

LMCache is a KV cache acceleration layer designed specifically for LLM services. Through cross-instance cache reuse, multi-level storage (GPU/CPU/disk/S3), and zero-copy technology, it achieves 3-10x latency reduction and GPU computation savings in multi-turn dialogue and RAG scenarios.

Tags: KV cache · LLM inference · vLLM · RAG · cache optimization · TTFT · multi-level storage
Published 2026-04-04 06:43 · Recent activity 2026-04-04 06:51 · Estimated read: 6 min

Section 01

LMCache: A Data Center-Scale KV Cache Layer That Reduces LLM Inference Latency by 3-10x

LMCache is a KV cache acceleration layer designed specifically for LLM services. Its core advantages are cross-instance cache reuse, multi-level storage (GPU/CPU/disk/S3/NIXL), and zero-copy transfer. In multi-turn dialogue and RAG scenarios it can reduce latency by 3-10x and save substantial GPU computation, eliminating the waste of repeatedly processing the same context that traditional inference incurs.


Section 02

Background: Pain Points of Repeated Computation in LLM Inference and Limitations of Existing Solutions

In LLM inference, Time to First Token (TTFT) is a key experience metric. In multi-turn dialogue and RAG scenarios, however, the same context (system prompts, retrieved document fragments) is processed again and again, wasting substantial GPU computation (e.g., over 90% of the tokens in a RAG prompt may be repeats). Existing solutions such as vLLM's prefix cache have limitations: the cache cannot be shared across instances, GPU memory becomes a bottleneck, only prefix matches can be reused, and hit rates are often below 30%.
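To see why prefix-only matching yields low hit rates in RAG, consider two prompts that embed the same retrieved chunk after different user queries. The sketch below is illustrative, not vLLM's actual implementation; all names are hypothetical.

```python
# Illustrative sketch: a prefix-only cache reuses KV entries only while the
# token sequences match from position 0, so a shared document chunk placed
# after differing prefixes is recomputed from scratch.

def prefix_hit_length(cached: list[str], request: list[str]) -> int:
    """Number of leading tokens the request shares with a cached sequence."""
    n = 0
    for a, b in zip(cached, request):
        if a != b:
            break
        n += 1
    return n

# Two RAG prompts embedding the same 100-token document chunk after
# different queries:
doc = ["chunk_tok_%d" % i for i in range(100)]
req1 = ["sys", "queryA"] + doc
req2 = ["sys", "queryB"] + doc

# With req1 cached, req2 reuses only the 1-token shared prefix ("sys"),
# even though 100 document tokens overlap mid-sequence.
print(prefix_hit_length(req1, req2))  # → 1 (of 102 tokens)
```

Arbitrary-fragment reuse, described in the next section, is aimed at exactly this mid-sequence overlap that prefix matching cannot capture.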


Section 03

Core Design of LMCache: Cross-Instance Sharing and Multi-Level Storage

LMCache treats KV cache as a shareable asset. Its core design includes:

  1. Cross-instance sharing: Achieves data center-wide cache reuse through distributed protocols, supporting P2P direct connection and central coordination modes;
  2. Multi-level storage: Covers GPU memory (hot data), CPU memory (zero-copy acceleration), local disk (persistence), S3 (cross-cluster), NIXL (RDMA cross-node);
  3. Arbitrary fragment reuse: Breaks through the limitation of prefix matching and supports cache retrieval of intermediate text fragments.

Section 04

Key Technologies: Zero-Copy, vLLM Integration, and CacheBlend

Technical highlights of LMCache:

  1. Zero CPU copy: uses GPUDirect Storage (GDS) and RDMA so the GPU communicates directly with storage, eliminating CPU staging-copy latency;
  2. Deep integration with vLLM: intercepts KV operations via a hook mechanism, writes to the cache asynchronously during the prefill phase, queries and reuses entries during the decode phase, and manages the cache with LRU/LFU eviction;
  3. CacheBlend: a knowledge-fusion technique proposed in the EuroSys 2025 paper of the same name, which intelligently fuses overlapping cache fragments to avoid fully recomputing attention over them.

Section 05

Performance Benefits and Typical Application Scenarios

Actual tests of LMCache combined with vLLM show:

  - Multi-turn QA (10 turns): 5-8x latency reduction, 60-80% GPU savings
  - RAG (100-page document + 10 questions): 8-10x latency reduction, 70-90% GPU savings
  - Code completion (long files): 3-5x latency reduction, 50-70% GPU savings

Typical scenarios: enterprise knowledge-base Q&A, AI programming assistants, multi-agent collaboration, and long-context analysis (legal/medical/financial documents).

Section 06

Deployment and Usage: Simple Installation and Flexible Configuration

Installation: pip install lmcache. The extension is loaded automatically when vLLM starts. Storage backends and cache policies can be specified via environment variables or a configuration file. It supports pure-GPU, GPU+CPU hybrid, and multi-level-storage deployment modes, and is also compatible with the SGLang inference engine.
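A deployment might look like the following sketch. Only `pip install lmcache` comes from the text above; the environment variable names, the connector name, and the model are illustrative placeholders, so check the LMCache documentation for the exact names your version supports.

```shell
# Install LMCache (from the text above).
pip install lmcache

# Hypothetical multi-level storage configuration via environment variables
# (names are assumptions, not verified against the LMCache docs):
export LMCACHE_CHUNK_SIZE=256          # tokens per cached KV chunk (assumed)
export LMCACHE_LOCAL_CPU=True          # enable the CPU-memory tier (assumed)
export LMCACHE_MAX_LOCAL_CPU_SIZE=20   # CPU tier budget in GB (assumed)

# Start vLLM with an LMCache KV connector (connector name is illustrative):
vllm serve meta-llama/Llama-3.1-8B-Instruct \
  --kv-transfer-config '{"kv_connector":"LMCacheConnectorV1","kv_role":"kv_both"}'
```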


Section 07

Academic Background and Community Ecosystem

LMCache is developed by researchers at institutions including the University of Chicago and the University of California, Berkeley. Its results are published at SIGCOMM 2024 (CacheGen) and EuroSys 2025 (CacheBlend), along with technical reports. It is licensed under Apache 2.0, and the community is active: bi-weekly meetings, a Slack workspace, detailed documentation and examples, and integrations with multiple LLM platforms.


Section 08

Conclusion: Paradigm Shift in LLM Inference Optimization

LMCache marks a paradigm shift in LLM inference optimization from "computing faster" to "computing less". By elevating the KV cache into manageable, shareable, and persistent infrastructure, it addresses the efficiency problems of long-context and multi-instance scenarios. As long-context models and agent systems become widespread, efficient KV cache management will be an essential component, and LMCache is leading this evolution.