Zing Forum

Research on Adaptive KV Cache Placement Strategy Under Hierarchical Memory Architecture

This article introduces an academic study on KV Cache memory management in large language model (LLM) inference, proposing an adaptive KV Cache placement strategy. Using a four-level memory simulator, the study shows that dynamically scheduling KV Cache across GPU memory (HBM), host memory (DRAM), local SSD, and remote storage significantly reduces inference latency and memory overhead compared to static placement baselines.

Tags: Large Language Models · KV Cache · Memory Optimization · Hierarchical Storage · Inference Acceleration · GPU Memory Management · llama.cpp
Published: 2026-05-09 22:47 · Recent activity: 2026-05-09 22:53 · Estimated read: 8 min

Section 01

Introduction: Key Points of the Research on Adaptive KV Cache Placement Strategy Under Hierarchical Memory Architecture

This article addresses the problem of KV Cache memory management in large language model (LLM) inference and proposes an adaptive KV Cache placement strategy. Evaluated in a four-level memory simulator, the strategy dynamically schedules KV Cache across GPU memory (HBM), host memory (DRAM), local SSD, and remote storage, significantly reducing inference latency and memory overhead compared to static placement baselines. The research points to an important direction for LLM inference optimization.


Section 02

Research Background: Memory Challenges of KV Cache

LLM inference consists of a prefill phase and a decoding phase. The decoding phase caches the keys and values of previously processed tokens (the KV Cache) to avoid redundant computation. As the context length increases, KV Cache memory usage grows linearly (e.g., a 70B model processing 32K tokens requires tens of GB of memory), making it a bottleneck for long-context inference. Modern computing platforms have a hierarchical memory architecture: HBM (high bandwidth, low capacity), DRAM (large capacity, medium bandwidth), local SSD (large capacity, low bandwidth), and remote storage (effectively unlimited capacity but high latency). Current inference engines (e.g., llama.cpp) use static placement strategies: once HBM is exhausted, the KV Cache is offloaded wholesale, leading to performance loss.
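
A back-of-envelope calculation makes the linear growth concrete. The sketch below estimates KV Cache size from the attention configuration; the layer and head counts are illustrative assumptions for a 70B-class model with full multi-head attention, not figures taken from the paper.

```python
# Rough KV Cache size estimate (illustrative parameters, not the paper's exact model).
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, n_tokens, bytes_per_elem=2):
    """K and V tensors are stored for every layer and every cached token."""
    return 2 * n_layers * n_kv_heads * head_dim * n_tokens * bytes_per_elem

# A 70B-class model with full multi-head attention (80 layers, 64 KV heads, head_dim 128)
# at a 32K-token context in FP16 (2 bytes per element):
size = kv_cache_bytes(n_layers=80, n_kv_heads=64, head_dim=128, n_tokens=32_768)
print(f"{size / 2**30:.1f} GiB")  # ~80 GiB; grouped-query attention shrinks this proportionally
```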


Section 03

Design of Adaptive KV Cache Placement Strategy

The adaptive strategy is based on three key observations: access locality (the KV Cache of recent tokens is accessed most frequently), hierarchical storage characteristics (significant differences in bandwidth, capacity, and latency across tiers), and dynamic load changes (memory pressure during inference varies with generated length and concurrent requests). The strategy keeps hot (frequently accessed) data in HBM, moves warm data to DRAM, offloads cold data to SSD or remote storage, and migrates data between tiers as access patterns and memory pressure change.
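
The exact placement policy is not spelled out in this summary, but a minimal recency-based sketch of the hot/warm/cold classification could look like the following; the tier thresholds are hypothetical, and a real policy would also weigh per-tier free capacity and current load.

```python
from enum import Enum

class Tier(Enum):
    HBM = 0      # GPU memory: hot data
    DRAM = 1     # host memory: warm data
    SSD = 2      # local SSD: cold data
    REMOTE = 3   # remote storage: coldest data

# Hypothetical recency thresholds, measured in decoding steps since last access.
HOT_WINDOW, WARM_WINDOW, COLD_WINDOW = 1_024, 8_192, 65_536

def place(last_access_step: int, current_step: int) -> Tier:
    """Classify a KV Cache block by access recency and map it to a memory tier."""
    age = current_step - last_access_step
    if age <= HOT_WINDOW:
        return Tier.HBM
    if age <= WARM_WINDOW:
        return Tier.DRAM
    if age <= COLD_WINDOW:
        return Tier.SSD
    return Tier.REMOTE
```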


Section 04

Construction of Four-Level Memory Simulator

To verify the strategy's effectiveness, the study builds a four-level memory simulator with the following capabilities: 1. Precisely models the storage characteristics of each tier (bandwidth, capacity, and latency are parameterized and adjustable, supporting different GPU types); 2. Supports multiple KV Cache quantization schemes (FP16, Q8_0, Q4_0, etc.) to evaluate the impact of quantization on performance and accuracy; 3. Implements both the static baselines (consistent with llama.cpp) and the adaptive strategy for a fair comparison.
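
As a rough illustration of how such a simulator can be parameterized, the sketch below defines per-tier specifications and a simple transfer-cost model; the bandwidth, capacity, and latency numbers are placeholder assumptions, not the study's calibrated values.

```python
from dataclasses import dataclass

@dataclass
class TierSpec:
    name: str
    capacity_gb: float     # usable capacity
    bandwidth_gbps: float  # sustained transfer bandwidth (GB/s)
    latency_us: float      # fixed per-access latency (microseconds)

# Placeholder numbers only; the simulator exposes these as adjustable parameters
# so different GPU types and storage backends can be modeled.
TIERS = [
    TierSpec("HBM",    40.0,   1500.0, 1.0),
    TierSpec("DRAM",   256.0,  50.0,   10.0),
    TierSpec("SSD",    2000.0, 5.0,    100.0),
    TierSpec("Remote", 1e6,    1.0,    10_000.0),
]

def transfer_time_ms(bytes_moved: int, tier: TierSpec) -> float:
    """Simple cost model: fixed access latency plus size divided by bandwidth."""
    return tier.latency_us / 1e3 + (bytes_moved / 1e9) / tier.bandwidth_gbps * 1e3
```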


Section 05

Experimental Results and Analysis

Experiments were conducted in the Google Colab environment on GPUs such as the A100, T4, and L4, using models such as SmolLM2-135M for verification. Key results: 1. Latency: the adaptive strategy significantly reduces tail latency, with a smooth transition from HBM to DRAM and no sudden performance degradation; 2. Memory efficiency: the strategy supports longer contexts or higher concurrency; 3. Quantization: Q4 quantization reduces storage requirements by about 75% with an acceptable impact on accuracy; 4. Energy consumption: the study breaks down the energy shares of weight loading, KV access, and MAC computation, providing a basis for energy-efficiency optimization.
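
The roughly 75% saving follows directly from the bit widths involved. The arithmetic below assumes llama.cpp-style block quantization (one FP16 scale shared per 32-element block), which lands just under the ideal 4-bit figure.

```python
# Per-element storage cost in bits, assuming llama.cpp-style block formats
# with one FP16 scale shared by every 32-element block.
FP16_BITS = 16
Q8_0_BITS = 8 + 16 / 32   # ~8.5 bits per element
Q4_0_BITS = 4 + 16 / 32   # ~4.5 bits per element

for name, bits in [("Q8_0", Q8_0_BITS), ("Q4_0", Q4_0_BITS)]:
    saving = 1 - bits / FP16_BITS
    print(f"{name}: {bits:.2f} bits/elem, ~{saving:.0%} smaller than FP16")
# Q4_0 comes out ~72% smaller; the ~75% figure corresponds to a pure 4-bit encoding.
```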


Section 06

Engineering Value and Application Scenarios

For the open-source community, the study offers optimization directions for inference engines such as llama.cpp, where the adaptive strategy could be adopted as an enhancement to improve efficiency. Practical scenarios: 1. Long-context applications (e.g., RAG): reduced latency; 2. High-concurrency services: better GPU memory utilization to serve more users; 3. Edge devices: intelligent offloading to support longer contexts.


Section 07

Research Limitations and Future Directions

Current limitations: the results are validated in a simulator and require tuning for actual deployment, and the adaptive strategy incurs additional decision overhead that must be optimized for extremely low-latency scenarios. Future directions: 1. Predictive migration: migrating data ahead of time to hide latency; 2. Multi-request collaboration: sharing KV Cache across concurrent requests; 3. Hardware co-design: working with vendors to explore dedicated hardware features.


Section 08

Research Summary

This study systematically explores KV Cache optimization strategies under a hierarchical memory architecture and, through the four-level simulator, shows that the adaptive strategy outperforms static baselines in latency, memory efficiency, and scalability. It provides theoretical analysis and experimental data for LLM inference optimization engineers and researchers, and the open-source reproduction process (a Colab notebook) makes verification and extension straightforward. As long-context applications become widespread, KV Cache management will be a key issue, and the adaptive strategy is expected to see wide application.