Zing Forum

CacheGen Reproduction Project: Efficient KV Cache Compression and Streaming Transmission Scheme for Large Model Services

CacheGen is a reproduction project of KV cache compression technology optimized for large model inference. It achieves end-to-end cache transmission acceleration through int8 quantization compression and vLLM integration.

Tags: KV cache · large-model inference · vLLM · quantization compression · distributed inference · performance optimization
Published 2026-04-17 23:14 · Recent activity 2026-04-17 23:23 · Estimated read: 5 min
Section 01

Introduction / Main Floor

CacheGen is a reproduction of KV-cache compression technology for large-model inference: the cache is compressed with int8 quantization and integrated with vLLM, accelerating end-to-end cache transmission.

Section 02

Background: Performance Bottlenecks in Large Model Inference

During inference with large language models, each generation step attends over the Key-Value (KV) cache of all previous tokens. As conversation length increases, the GPU memory occupied by the KV cache grows linearly, which not only limits the maximum context length the model can handle but also becomes a major performance bottleneck in multi-turn dialogue. When the KV cache must be transmitted between compute nodes (e.g., in distributed inference or prefix-cache sharing scenarios), the sheer data volume causes severe network latency.
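The linear growth can be made concrete with a back-of-the-envelope calculation. The model dimensions below are illustrative of a typical 7B-class model, not taken from the project:

```python
def kv_cache_bytes(num_layers, num_heads, head_dim, seq_len,
                   dtype_bytes=2, batch=1):
    """KV cache size: 2 tensors (K and V) per layer, each holding
    batch * num_heads * seq_len * head_dim elements."""
    return 2 * num_layers * batch * num_heads * seq_len * head_dim * dtype_bytes

# Illustrative 7B-class shape: 32 layers, 32 heads, head_dim 128, fp16.
gib = kv_cache_bytes(32, 32, 128, seq_len=4096) / 2**30
print(f"{gib:.2f} GiB")  # 2.00 GiB at 4K tokens, growing linearly with length
```

At these dimensions the cache already reaches 2 GiB at a 4K-token context, and doubling the context doubles both the memory footprint and the bytes that must cross the network.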

CacheGen is a technical solution proposed to address this problem. It significantly reduces transmission overhead while maintaining model output quality by efficiently compressing KV cache and adopting streaming processing in network transmission. This GitHub repository is an open-source reproduction implementation of the CacheGen paper.

Section 03

Project Architecture and Core Components

This reproduction project includes three main modules, forming a complete end-to-end processing flow:

Section 04

KV Extraction Module (kv_extraction_hf)

Based on the HuggingFace Transformers framework, it implements offline extraction of KV tensors for causal language models. This module can load mainstream open-source large models (such as OPT, Mistral, etc.), capture and save the Key and Value tensors of each layer during inference, and provide raw data for subsequent compression processing.
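A minimal sketch of what such extraction can look like with HuggingFace Transformers. A tiny randomly initialized OPT is used here so the snippet runs without downloading weights; the project's actual extraction code loads pretrained checkpoints and will differ in detail:

```python
import torch
from transformers import OPTConfig, OPTForCausalLM

# Tiny randomly initialized OPT so the sketch runs without downloads;
# the real project loads pretrained checkpoints (OPT, Mistral, ...).
cfg = OPTConfig(hidden_size=64, num_hidden_layers=2, num_attention_heads=4,
                ffn_dim=128, vocab_size=1000, max_position_embeddings=128)
model = OPTForCausalLM(cfg).eval()

input_ids = torch.randint(0, cfg.vocab_size, (1, 8))
with torch.no_grad():
    out = model(input_ids, use_cache=True)

# past_key_values holds one (key, value) pair per layer, each shaped
# [batch, num_heads, seq_len, head_dim] -- the raw data to be compressed.
kv = [(k.clone(), v.clone()) for k, v in out.past_key_values]
print(len(kv), tuple(kv[0][0].shape))
```

Each captured pair can then be serialized (e.g. with torch.save) per layer, which is what makes offline compression experiments possible without rerunning the model.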

Section 05

Codec Module (encoder/decoder)

It implements an int8 quantization-based serialization compression pipeline, including:

  • Quantizing floating-point KV tensors into 8-bit integer representations
  • Generating necessary scaling factors (scale) as auxiliary data
  • Providing corresponding decoders for dequantization and restoration

This compression strategy can significantly reduce the size of KV cache while maintaining model output quality.
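A minimal sketch of the idea, using per-tensor symmetric int8 quantization with a single stored scale factor; the project's actual pipeline may use finer-grained scales (e.g. per layer or per channel):

```python
import torch

def quantize_int8(t: torch.Tensor):
    """Per-tensor symmetric int8 quantization: emit int8 values plus a
    float scale so the decoder can reconstruct approximate floats."""
    scale = t.abs().max().clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(t / scale), -127, 127).to(torch.int8)
    return q, scale.item()

def dequantize_int8(q: torch.Tensor, scale: float) -> torch.Tensor:
    """Decoder side: restore floats from int8 values and the scale."""
    return q.to(torch.float32) * scale

x = torch.randn(4, 16)
q, s = quantize_int8(x)
x_hat = dequantize_int8(q, s)
print(q.element_size(), x.element_size())  # 1 byte vs 4 bytes per element
print(torch.allclose(x, x_hat, atol=s))    # True: error bounded by one step
```

Going from fp32 to int8 gives roughly a 4x size reduction (2x from fp16) before any further entropy coding, at the cost of a bounded per-element rounding error.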

Section 06

vLLM Integration Module (third_party/vllm)

The project vendors a modified vLLM inference engine under third_party/vllm, enabling true end-to-end benchmarking. The CacheGenConnector component handles cache loading and transmission in vLLM's asynchronous inference engine, so compression effects can be tested in real inference scenarios.
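The connector's role can be sketched as a load-or-fallback step in an async engine. This is a hypothetical illustration only; the real CacheGenConnector in the vendored vLLM exposes a different API:

```python
import asyncio

class CacheGenConnectorSketch:
    """Hypothetical illustration of a connector's role: fetch compressed
    KV bytes for a prefix, decode them, and hand them to the engine.
    The actual CacheGenConnector API in the vendored vLLM differs."""

    def __init__(self, store):
        # store maps a prefix/request id to compressed KV bytes
        self.store = store

    async def load_kv(self, prefix_id):
        blob = self.store.get(prefix_id)
        if blob is None:
            return None  # cache miss: the engine falls back to full prefill
        return self.decode(blob)

    def decode(self, blob):
        # Stand-in for int8 dequantization of the serialized cache.
        return blob.decode()

async def demo():
    conn = CacheGenConnectorSketch({"doc-1": b"kv-bytes"})
    return await conn.load_kv("doc-1"), await conn.load_kv("doc-2")

hit, miss = asyncio.run(demo())
print(hit, miss)
```

The key design point the sketch captures is that a cache miss must degrade gracefully to ordinary prefill rather than fail the request.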

Section 07

Benchmarking and Evaluation Capabilities

The project provides a complete benchmarking framework, run_benchmarks.py, which compares performance between the baseline and cachegen modes:

Section 08

Testing Dimensions

  • Time to First Token (TTFT): Time from request to the first output token
  • End-to-End Latency: Time taken for the complete generation process
  • Throughput: Number of tokens generated per second
  • Compression Ratio: Size ratio between original data and compressed data
  • Network Transmission Time: Time taken for KV cache transmission across nodes
  • Decoding Time: Overhead of dequantization and restoration
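Several of these dimensions can be derived from per-request timestamps and byte counts. A sketch of the arithmetic (the function name and inputs are illustrative, not the project's actual API):

```python
def benchmark_metrics(request_t, first_token_t, end_t, n_tokens,
                      raw_bytes, compressed_bytes):
    """Derive report metrics from timestamps (seconds) and byte counts.
    Illustrative helper, not the project's run_benchmarks.py API."""
    ttft = first_token_t - request_t                 # Time to First Token
    e2e = end_t - request_t                          # end-to-end latency
    tok_per_s = n_tokens / (end_t - first_token_t)   # generation throughput
    ratio = raw_bytes / compressed_bytes             # compression ratio
    return {"ttft_s": ttft, "e2e_s": e2e,
            "tok_per_s": tok_per_s, "compression": ratio}

# 100 tokens generated, 4 MiB of KV cache compressed down to 1 MiB.
m = benchmark_metrics(0.0, 0.25, 2.25, 100, 4 * 2**20, 2**20)
print(m)  # ttft 0.25 s, e2e 2.25 s, 50 tok/s, 4x compression
```

Network transmission time and decoding time are measured separately by timing the transfer and dequantization steps themselves.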