Zing Forum


ForkKV: Scaling Multi-LoRA Agent Services via Copy-on-Write KV Cache Separation

ForkKV draws inspiration from the operating system's fork mechanism. Through its DualRadixTree architecture and ResidualAttention kernel, it separates the KV cache of multi-LoRA agent services into shared and lightweight dedicated parts, achieving up to a 3x throughput improvement.

Tags: ForkKV · LoRA · KV Cache · Multi-Agent · Copy-on-Write · LLM Inference Optimization · Model Serving Systems
Published 2026-04-08 02:52 · Recent activity 2026-04-09 10:03 · Estimated read: 5 min

Section 01

ForkKV: Core Breakthrough for Scaling Multi-LoRA Agent Services

ForkKV draws inspiration from the operating system's fork mechanism. By separating the KV cache into shared and lightweight dedicated parts via copy-on-write, combined with its DualRadixTree architecture and ResidualAttention kernel, it resolves the memory bottleneck of multi-LoRA agent services and achieves up to a 3x throughput improvement.


Section 02

Background: Memory Bottleneck in Multi-Agent Workflows

Large language model (LLM) services are shifting toward multi-agent collaboration. LoRA allows specialized agents to coexist on a single base model, but each agent's LoRA activation makes its KV cache diverge from the base model's, rendering traditional prefix caching ineffective. The system is thus forced to maintain redundant per-agent cache copies, which rapidly saturates GPU memory and reduces throughput.
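To see why redundant per-agent caches saturate memory so quickly, a back-of-envelope sketch helps. All model dimensions below are hypothetical example values, not figures from the paper:

```python
# Back-of-envelope illustration (hypothetical config) of why divergent
# per-agent KV caches exhaust GPU memory: without sharing, cache memory
# grows linearly with the number of agents.

def kv_bytes(seq_len, layers, heads, head_dim, dtype_bytes=2):
    # K and V per token per layer: 2 * heads * head_dim values (fp16 = 2 bytes)
    return seq_len * layers * 2 * heads * head_dim * dtype_bytes

per_agent = kv_bytes(seq_len=8192, layers=32, heads=32, head_dim=128)
print(per_agent / 2**30)      # 4.0  -> 4 GiB per agent for this config
print(8 * per_agent / 2**30)  # 32.0 -> 8 agents: 32 GiB of mostly-redundant cache
```

With a largely shared prefix context, most of that 32 GiB duplicates the same data, which is exactly the redundancy ForkKV targets.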


Section 03

Core Design of ForkKV: Architecture and Kernel

Core Innovation: Inspired by the operating system's fork and copy-on-write mechanisms, ForkKV separates the KV cache into shared components (the prefix context common to all agents) and dedicated components (states unique to each agent's LoRA activation). New agents inherit the shared cache instantly; a copy is made only when an agent modifies a shared entry.
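The fork/copy-on-write idea can be sketched with a toy block-based cache. The class names (`KVBlock`, `AgentCache`) and block layout here are illustrative assumptions, not ForkKV's actual API:

```python
# Minimal copy-on-write sketch: forking shares blocks by reference; a
# private copy is made only for the block an agent actually writes to.

class KVBlock:
    """A fixed-size block of KV cache entries with a reference count."""
    def __init__(self, data):
        self.data = list(data)
        self.refcount = 1

class AgentCache:
    """Per-agent view over a list of (possibly shared) KV blocks."""
    def __init__(self, blocks):
        self.blocks = blocks

    def fork(self):
        # Creating a new agent only bumps refcounts -- no data is copied.
        for b in self.blocks:
            b.refcount += 1
        return AgentCache(list(self.blocks))

    def write(self, idx, pos, value):
        block = self.blocks[idx]
        if block.refcount > 1:
            # First write to a shared block triggers the copy.
            block.refcount -= 1
            block = KVBlock(block.data)
            self.blocks[idx] = block
        block.data[pos] = value

# Usage: forking is free; writing diverges only the touched block.
base = AgentCache([KVBlock([0, 0]), KVBlock([0, 0])])
agent = base.fork()
agent.write(1, 0, 42)
assert base.blocks[0] is agent.blocks[0]       # still shared
assert base.blocks[1] is not agent.blocks[1]   # copied on write
```

This mirrors how fork in an OS shares page tables until a process writes to a page.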

DualRadixTree Architecture: A main RadixTree manages the shared cache index, while slave RadixTrees maintain each agent's incremental view. Agent creation thereby reduces to pointer operations.
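A simplified sketch of this two-tree split: the main tree indexes shared prefixes, and an agent's view is just a pointer into it plus an initially empty private overlay. The names below (`MainRadixTree`, `AgentView`) are assumptions for illustration, not the paper's implementation:

```python
# DualRadixTree idea in miniature: agent creation copies nothing,
# it only records a pointer into the shared tree.

class RadixNode:
    def __init__(self):
        self.children = {}   # token -> RadixNode (radix edges simplified to 1 token)
        self.block = None    # handle to a shared KV cache block

class MainRadixTree:
    """Indexes the KV cache shared by all agents."""
    def __init__(self):
        self.root = RadixNode()

    def insert(self, tokens, block):
        node = self.root
        for t in tokens:
            node = node.children.setdefault(t, RadixNode())
        node.block = block
        return node

class AgentView:
    """Agent-specific incremental view: O(1) creation -- a pointer into the
    main tree plus a private overlay for LoRA-divergent entries."""
    def __init__(self, shared_node):
        self.shared = shared_node   # pointer, not a copy
        self.overlay = {}           # token tuple -> private block

main = MainRadixTree()
node = main.insert([1, 2, 3], block="shared-kv")
a = AgentView(node)              # agent creation is a pointer operation
b = AgentView(node)
a.overlay[(4,)] = "agent-a-kv"   # divergence stays in agent a's overlay
assert a.shared is b.shared
```

Lookups would consult the overlay first and fall back to the shared tree, so divergent entries shadow shared ones without duplicating them.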

ResidualAttention Kernel: It loads shared and dedicated KV cache blocks into GPU on-chip SRAM, splices them on the fly into complete attention tensors, and exploits LoRA's low-rank structure to decompose the computation, minimizing data-movement overhead.
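The logical computation behind the splice step can be shown with a small NumPy sketch: gather shared and dedicated KV into one logical sequence, then attend over it. This only illustrates the math; the real kernel does this block-wise in SRAM and additionally uses the low-rank decomposition, and all names here are assumptions:

```python
# Splice-then-attend sketch: shared and agent-dedicated KV are logically
# concatenated before a standard softmax attention (single query, one head).
import numpy as np

def spliced_attention(q, shared_k, shared_v, ded_k, ded_v):
    """q: (d,); each k/v matrix: (n_i, d). Attend over shared + dedicated KV."""
    k = np.concatenate([shared_k, ded_k], axis=0)
    v = np.concatenate([shared_v, ded_v], axis=0)
    scores = k @ q / np.sqrt(q.shape[0])
    w = np.exp(scores - scores.max())   # numerically stable softmax
    w /= w.sum()
    return w @ v

rng = np.random.default_rng(0)
d = 4
q = np.ones(d)
shared_k, shared_v = rng.normal(size=(3, d)), rng.normal(size=(3, d))
ded_k, ded_v = rng.normal(size=(2, d)), rng.normal(size=(2, d))
out = spliced_attention(q, shared_k, shared_v, ded_k, ded_v)
assert out.shape == (d,)
```

The kernel's contribution is doing this without ever materializing the concatenated tensors in HBM, which is where the data-movement savings come from.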


Section 04

Experimental Evaluation: Significant Performance Improvement

Evaluated on LLMs of different scales and across multiple datasets, ForkKV outperforms existing multi-LoRA serving systems in the following respects:

  • Up to 3.0x higher throughput
  • Negligible impact on generation quality
  • Supports more concurrent agents with the same GPU memory
  • Performance advantages become more pronounced as the number of agents increases

This validates its effectiveness in solving the memory bottleneck.


Section 05

Technical Insights and Future Outlook

Insights: Cross-domain technique transfer (here, from operating-system memory management to LLM serving) can yield breakthrough improvements, and a systems perspective is valuable for solving LLM deployment problems.

Future Directions:

  1. Extend to multi-level cache hierarchies on CPU/disk
  2. Implement dynamic scaling of agents with lightweight fork
  3. Optimize the ResidualAttention kernel for AI accelerators

ForkKV will facilitate the construction of large-scale multi-agent collaboration systems.