Zing Forum

Thaw: A Snapshot Technology That Speeds Up LLM Inference Cold Start by 17x

A snapshot/restore system optimized for LLM inference, enabling fast capture and restoration of GPU states via Rust+CUDA, with support for KV cache persistence and multi-GPU tensor parallelism.

Tags: LLM inference, snapshot/restore, cold start optimization, vLLM, CUDA, Rust, KV cache, tensor parallelism, GPU optimization
Published 2026-04-15 00:43 · Recent activity 2026-04-15 01:00 · Estimated read 7 min

Section 02

The Cold Start Dilemma in Large Model Deployment

In production deployments of large language models (LLMs), cold start time is a long-neglected but impactful performance bottleneck. Starting a vLLM service for a large model such as Llama-3-70B can take nearly 10 minutes to complete weight loading, GPU memory allocation, and KV cache initialization. That delay is hard to accept in cloud environments that must scale rapidly, or in edge computing scenarios.

The Thaw project was born to address this pain point. Through its innovative snapshot/restore mechanism, it reduces the cold start time of Llama-3-70B on a dual-A100 setup from 546 seconds to 31.8 seconds, a 17.2x speedup. This is more than a numerical optimization; it fundamentally changes the deployment paradigm for large model services.

Section 03

Core Performance Data

Thaw demonstrates impressive acceleration effects across different hardware configurations:

Section 04

Large-Scale Model (Llama-3-70B, Dual A100 Tensor Parallelism)

| Method | Time | Speedup / Bandwidth |
|---|---|---|
| Normal vLLM cold start | 546.5 s | 1x |
| Thaw restoration | 31.8 s | 17.2x |
| Weight-only restoration | 10.5 s | 6.74 GB/s per card |
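As a back-of-the-envelope sanity check on these numbers (a sketch assuming Llama-3-70B's fp16 weights total roughly 141 GB, split evenly across the two cards; the exact weight size is not stated in the post), the reported 6.74 GB/s per-card bandwidth lines up well with the 10.5 s weight-only figure:

```rust
fn main() {
    // Assumption: Llama-3-70B in fp16 is ~70.6e9 params * 2 bytes ≈ 141 GB.
    let total_gb = 141.0_f64;
    // TP=2 splits the weights evenly, so each A100 restores ~70.5 GB.
    let per_card_gb = total_gb / 2.0;
    // Per-card restore bandwidth reported in the table above.
    let bandwidth_gbps = 6.74_f64;
    let restore_secs = per_card_gb / bandwidth_gbps;
    // -> ~10.5 s, matching the measured weight-only restoration time.
    println!("estimated weight-only restore: {restore_secs:.1} s");
}
```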

Section 05

Medium-Scale Model (Llama-3-8B, Single GPU)

| Hardware | Normal Startup | Thaw Restoration | Speedup | Throughput |
|---|---|---|---|---|
| H100 SXM | 20.7 s | 3.5 s | 5.9x | 10.69 GB/s |
| RTX PRO 6000 (Blackwell) | 28.6 s | 3.2 s | 8.9x | - |
| RTX A6000 | 73.2 s | 5.8 s | 12.6x | - |

An interesting pattern: The larger the model, the more significant Thaw's acceleration effect. This is because in large models, weight loading time accounts for a higher proportion of total cold start time, and Thaw achieves acceleration precisely by optimizing the weight restoration process.

Section 06

Snapshot Capture Mechanism

The core innovation of Thaw lies in the complete capture of GPU states. Traditional model saving usually only stores weight files (e.g., Safetensors format), while Thaw's freeze operation captures two types of key data:

Model Weight Snapshot (.thaw file): Contains all model parameters on the GPU, storing GPU memory content directly in binary form and avoiding the serialization/deserialization overhead of traditional formats.

KV Cache Snapshot (.thawkv file): This is Thaw's unique advantage. It captures vLLM's prefix-cached KV blocks and their hash maps, so the restored model retains not only the weights but also the context cache from previous inferences.
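The post doesn't describe the on-disk layout, so the Rust sketch below is purely illustrative: a hypothetical `.thaw` header (magic, tensor-parallel rank, payload length) framing a raw device-memory image, round-tripped through an in-memory buffer. The `ThawHeader` name and its fields are assumptions, not the real format.

```rust
use std::io::{Cursor, Read, Write};

// Hypothetical .thaw header -- the real on-disk format is not documented in
// the post. It illustrates the idea of dumping raw GPU memory with minimal
// framing instead of re-serializing tensors into a container format.
struct ThawHeader {
    magic: [u8; 4],   // e.g. b"THAW"
    rank: u32,        // tensor-parallel rank that produced this shard
    payload_len: u64, // length of the raw device-memory image in bytes
}

impl ThawHeader {
    fn write_to<W: Write>(&self, w: &mut W) -> std::io::Result<()> {
        w.write_all(&self.magic)?;
        w.write_all(&self.rank.to_le_bytes())?;
        w.write_all(&self.payload_len.to_le_bytes())
    }

    fn read_from<R: Read>(r: &mut R) -> std::io::Result<Self> {
        let mut magic = [0u8; 4];
        r.read_exact(&mut magic)?;
        let mut rank_bytes = [0u8; 4];
        r.read_exact(&mut rank_bytes)?;
        let mut len_bytes = [0u8; 8];
        r.read_exact(&mut len_bytes)?;
        Ok(Self {
            magic,
            rank: u32::from_le_bytes(rank_bytes),
            payload_len: u64::from_le_bytes(len_bytes),
        })
    }
}

fn main() -> std::io::Result<()> {
    // Round-trip the header through an in-memory buffer.
    let hdr = ThawHeader { magic: *b"THAW", rank: 0, payload_len: 1 << 30 };
    let mut buf = Vec::new();
    hdr.write_to(&mut buf)?;
    let back = ThawHeader::read_from(&mut Cursor::new(buf))?;
    assert_eq!(&back.magic, b"THAW");
    assert_eq!(back.payload_len, 1 << 30);
    println!("header round-trip ok");
    Ok(())
}
```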

Section 07

Pipeline DMA Restoration

Thaw's restoration process (thaw) uses a sophisticated pipeline architecture to maximize hardware bandwidth utilization:

Step 1: Virtual Initialization. The system first initializes vLLM quickly with virtual weights, skipping time-consuming disk I/O. This step is nearly instantaneous, letting the serving framework reach the ready state immediately.

Step 2: Double-Buffered Pipeline DMA. Thaw uses two CUDA streams for pipelined transfer:

  • One stream reads snapshot data from NVMe to pinned host memory
  • The other stream asynchronously transfers data from host memory to the GPU

The two streams run in parallel, overlapping disk reads with PCIe transfers and eliminating the waiting inherent in a traditional serial process. The O_DIRECT flag bypasses the kernel page cache, further reducing memory-copy overhead.
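The double-buffered pipeline can be sketched on the CPU with two threads and a bounded channel standing in for the two CUDA streams. This is a minimal model, not Thaw's implementation: `pipelined_restore` and the chunking are assumptions, and real code would use O_DIRECT reads into pinned buffers plus `cudaMemcpyAsync`.

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

// CPU-only sketch: a "reader" stage stands in for the NVMe -> pinned-host
// stream, the consumer loop stands in for the host -> GPU DMA stream. A
// bounded channel of depth 2 models the two ping-pong buffers: the reader
// fills buffer N+1 while the copier drains buffer N, so disk reads and
// PCIe transfers overlap instead of running serially.
fn pipelined_restore(chunks: Vec<Vec<u8>>) -> usize {
    let (tx, rx) = sync_channel::<Vec<u8>>(2); // depth 2 = double buffering

    let reader = thread::spawn(move || {
        for chunk in chunks {
            // Real code: O_DIRECT read from the .thaw file into pinned memory.
            tx.send(chunk).unwrap();
        }
        // Dropping tx closes the channel and lets the copier stage finish.
    });

    let mut restored_bytes = 0;
    for chunk in rx {
        // Real code: cudaMemcpyAsync(host -> device) on the second stream.
        restored_bytes += chunk.len();
    }
    reader.join().unwrap();
    restored_bytes
}

fn main() {
    let chunks: Vec<Vec<u8>> = (0..8).map(|_| vec![0u8; 1024]).collect();
    assert_eq!(pipelined_restore(chunks), 8 * 1024);
    println!("pipeline restored all bytes");
}
```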

Step 3: KV Cache Reconstruction. After the weights are restored, KV cache blocks are copied back to the GPU over an independent DMA channel while the prefix cache's hash table is rebuilt. New requests can then hit the cache immediately, skipping expensive prefill computation.
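A minimal sketch of that hash-table rebuild, assuming each restored KV block carries the hash of the token prefix it caches. The `rebuild_prefix_index` function and the `(prefix hash, block id)` pairing are hypothetical, not vLLM's actual data structures:

```rust
use std::collections::HashMap;

// Illustrative rebuild of the prefix-cache index after restore: the lookup
// table mapping prefix hash -> GPU block id is reconstructed so that
// incoming requests with a cached prefix can skip prefill entirely.
fn rebuild_prefix_index(restored_blocks: &[(u64, usize)]) -> HashMap<u64, usize> {
    let mut index = HashMap::new();
    for &(prefix_hash, block_id) in restored_blocks {
        index.insert(prefix_hash, block_id);
    }
    index
}

fn main() {
    // Two restored blocks with made-up prefix hashes.
    let blocks = [(0xdead_beef_u64, 0), (0xcafe_f00d_u64, 1)];
    let index = rebuild_prefix_index(&blocks);
    assert_eq!(index[&0xdead_beef], 0);
    println!("prefix index rebuilt with {} entries", index.len());
}
```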

Section 08

Multi-GPU Tensor Parallelism Support

Thaw fully supports multi-card tensor parallelism, the standard configuration for large-scale model deployment. In the TP=2 configuration:

  • Snapshot Phase: Each GPU saves its own weight shard, generating weights.thaw (rank 0) and weights.rank1.thaw (rank 1)
  • Restoration Phase: Each card loads its own snapshot file in parallel, directly restoring local weights via RDMA or PCIe

This design ensures near-linear acceleration effects are maintained even in multi-card scenarios.
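The per-rank naming above suggests path logic like the following sketch, with one thread per rank standing in for the parallel per-GPU restore. The `shard_path` helper is hypothetical; only the two filenames come from the post.

```rust
use std::thread;

// Per-rank snapshot path, following the naming given above:
// rank 0 -> "weights.thaw", rank N -> "weights.rankN.thaw".
fn shard_path(rank: usize) -> String {
    if rank == 0 {
        "weights.thaw".to_string()
    } else {
        format!("weights.rank{rank}.thaw")
    }
}

fn main() {
    // Each rank restores its own shard in parallel; the closure stands in
    // for the per-GPU pipelined DMA restore described earlier.
    let handles: Vec<_> = (0..2)
        .map(|rank| thread::spawn(move || shard_path(rank)))
        .collect();
    let paths: Vec<String> = handles.into_iter().map(|h| h.join().unwrap()).collect();
    assert_eq!(paths, ["weights.thaw", "weights.rank1.thaw"]);
    println!("restoring shards: {paths:?}");
}
```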