Zing Forum


TurboQuant Benchmark: Evaluation of KV Cache Compression Performance on Apple Silicon

Introduces the turboquant-bench project, which provides a complete benchmarking solution for comparing standard FP16 and TurboQuant compressed inference on Apple Silicon platforms. It demonstrates up to 39% speed improvement and a 5x memory compression ratio in long-context scenarios.

Tags: KV cache compression, Apple Silicon, MLX, benchmark, TurboQuant, memory optimization, edge inference, quantization
Published 2026-03-28 23:44 · Recent activity 2026-03-29 01:15 · Estimated read 9 min

Section 01

Introduction / Main Post: TurboQuant Benchmark: Evaluation of KV Cache Compression Performance on Apple Silicon



Section 02

Project Background and Research Motivation

As context lengths in large language models continue to grow, the memory footprint of the KV cache has become a key bottleneck in inference optimization. The issue is particularly prominent on unified-memory devices such as Apple Silicon: because the GPU and CPU share the same physical memory, growth of the KV cache directly limits both the size of model that can be run and the maximum context length.

As an advanced KV cache compression technique, TurboQuant theoretically reduces memory usage significantly by compressing key-value pairs from FP16 to 2-3 bits. However, do the computational overheads from compression and decompression affect actual inference speed? Can the output quality after compression remain consistent with the original precision? These questions require systematic benchmarking to answer.

The turboquant-bench project was created precisely for this purpose. It provides an out-of-the-box benchmarking framework that lets developers compare standard FP16 inference against TurboQuant compressed inference on Apple Silicon devices with a single command, supplying concrete data to guide technology selection and optimization.


Section 03

Three Comparison Modes

The benchmarking framework implements three KV cache management modes to form a complete comparison dimension:

Standard Mode: Uses FP16 precision for keys and values. This is the default configuration of MLX-LM, serving as a quality benchmark and performance reference. This mode does not perform any compression, maintaining the highest numerical precision but with the largest memory footprint.

MLX-Quantized Mode: Uses MLX-LM's built-in QuantizedKVCache, which quantizes both keys and values to 4 bits. This is a native quantization solution in the Apple Silicon ecosystem, serving as a competitor comparison for TurboQuant.

TurboQuant Mode: Uses a combined strategy of 3-bit key compression and 2-bit value compression. This is the project's core test subject, integrating several advanced techniques such as random rotation, Lloyd-Max codebooks, and the QJL sign sketch.
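To make the TurboQuant-mode ingredients concrete, here is a minimal NumPy sketch of the rotate-then-quantize idea. Everything below is an illustrative assumption, not the project's code: a uniform per-row quantizer stands in for the Lloyd-Max codebook, and the QJL sign sketch is omitted.

```python
import numpy as np

def random_rotation(dim: int, seed: int = 0) -> np.ndarray:
    """Random orthogonal matrix via QR of a Gaussian, playing the role
    of TurboQuant's random rotation (it decorrelates coordinates)."""
    q, _ = np.linalg.qr(np.random.default_rng(seed).standard_normal((dim, dim)))
    return q

def quantize_rows(x: np.ndarray, bits: int):
    """Per-row uniform quantization to `bits` bits. A Lloyd-Max codebook
    (as in TurboQuant) would place levels optimally; uniform levels are
    a simplified stand-in that shows the same storage layout."""
    lo = x.min(axis=-1, keepdims=True)
    hi = x.max(axis=-1, keepdims=True)
    scale = (hi - lo) / (2 ** bits - 1)
    safe = np.where(scale == 0, 1.0, scale)          # guard constant rows
    codes = np.round((x - lo) / safe).astype(np.uint8)  # 3 bits -> 0..7
    return codes, lo, scale

def dequantize(codes, lo, scale):
    return codes.astype(np.float32) * scale + lo

# Keys at 3 bits (values would use 2 bits, per the TurboQuant mode above).
rng = np.random.default_rng(1)
keys = rng.standard_normal((16, 128)).astype(np.float32)
R = random_rotation(128)
codes, lo, scale = quantize_rows(keys @ R, bits=3)
keys_hat = dequantize(codes, lo, scale) @ R.T  # R is orthogonal: R.T undoes it
mean_abs_err = float(np.abs(keys_hat - keys).mean())
```

The intuition behind the rotation is that it spreads outlier coordinates across the whole vector, so a single per-row quantization range wastes fewer levels on extreme values.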


Section 04

Evaluation Metrics System

The project establishes a multi-dimensional evaluation metric system to comprehensively measure the impact of compression technology:

Generation Throughput (Gen tok/s): The number of tokens generated per second, reflecting inference speed. This is the most direct indicator of user experience.

Peak Memory (Peak Mem MB): The maximum memory usage during inference, measuring resource efficiency.

Token Match Rate (Token match %): The percentage of token-level matches between compressed output and standard output, measuring semantic consistency of the output.

Character Match Rate (Char match %): Character-level text similarity, providing a more granular quality assessment.
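The two quality metrics can be computed directly from the reference and compressed outputs. The post does not give turboquant-bench's exact definitions, so the positional token comparison and difflib-based character similarity below are assumptions:

```python
from difflib import SequenceMatcher

def token_match_rate(ref_tokens, test_tokens) -> float:
    """Position-wise token agreement with the FP16 reference, as a
    percentage of the longer sequence (missing tokens count as misses)."""
    n = max(len(ref_tokens), len(test_tokens))
    if n == 0:
        return 100.0
    same = sum(a == b for a, b in zip(ref_tokens, test_tokens))
    return 100.0 * same / n

def char_match_rate(ref_text: str, test_text: str) -> float:
    """Character-level similarity via difflib's matching-block ratio."""
    return 100.0 * SequenceMatcher(None, ref_text, test_text).ratio()
```

With these definitions, identical greedy outputs score 100% on both metrics, while a single diverging token lowers the token rate more sharply than the character rate.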


Section 05

Test Results for 32B Model

Tests on the Qwen2.5-32B-Instruct model show TurboQuant's significant advantages in long-context scenarios:

| Context Length | Standard FP16 | TurboQuant 3-bit | Speed Ratio | Quality Match |
|---|---|---|---|---|
| Short (36 tokens) | 6.7 tok/s | 6.8 tok/s | 101% | 100% |
| Medium (690 tokens) | 7.0 tok/s | 7.2 tok/s | 103% | 100% |
| Long (1130 tokens) | 6.9 tok/s | 9.6 tok/s | 139% | 100% |
| Long Generation (500 tokens) | 6.9 tok/s | 6.7 tok/s | 97% | 100% |

Key Finding: In long-context scenarios (1130 tokens), TurboQuant achieves a 39% speed improvement. This is because larger models are more sensitive to memory bandwidth—compressed KV cache reduces memory access overhead, which in turn improves overall throughput.
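The Speed Ratio column is derived arithmetic: TurboQuant tokens/s divided by FP16 tokens/s, rounded to the nearest percent. A quick check of the 32B rows:

```python
# (FP16 tok/s, TurboQuant tok/s) pairs from the 32B results table.
rows = {
    "short":    (6.7, 6.8),
    "medium":   (7.0, 7.2),
    "long":     (6.9, 9.6),
    "long_gen": (6.9, 6.7),
}
# Speed ratio as a percentage, e.g. long context: 100 * 9.6 / 6.9 -> 139.
ratios = {name: round(100 * tq / fp16) for name, (fp16, tq) in rows.items()}
```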


Section 06

Test Results for 3B Model

Tests on the smaller Qwen2.5-3B-Instruct model show a different pattern:

| Context Length | Standard FP16 | TurboQuant 3-bit | Speed Ratio | Quality Match |
|---|---|---|---|---|
| Short (36 tokens) | 70.6 tok/s | 69.1 tok/s | 98% | 100% |
| Medium (690 tokens) | 61.6 tok/s | 69.4 tok/s | 113% | 100% |
| Long (1130 tokens) | 59.4 tok/s | 63.2 tok/s | 106% | 100% |
| Long Generation (500 tokens) | 56.0 tok/s | 54.7 tok/s | 98% | 100% |

For the small model, TurboQuant still holds an advantage in medium and long-context scenarios, but the gains are smaller than for the 32B model. This is because small models have lower computational density, so the memory-bandwidth bottleneck is less pronounced than it is in large models.


Section 07

Memory Compression Effect

TurboQuant's memory-saving effect is equally impressive:

| Token Count | FP16 Cache | TurboQuant 3-bit | Compression Ratio |
|---|---|---|---|
| 4,096 | 512 MB | 113 MB | 4.5x |
| 16,384 | 2,048 MB | 413 MB | 5.0x |

As the context length grows, the compression ratio rises from 4.5x to 5.0x. This is because TurboQuant carries fixed overheads from techniques like group quantization, and those costs are amortized over longer sequences.
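For intuition on where the FP16 numbers come from, here is the standard KV-cache sizing formula. The model dimensions below are hypothetical assumptions chosen only so the result lines up with the table; the post does not state the model shape behind these figures.

```python
# FP16 KV cache size = 2 tensors (K and V) * layers * kv_heads * head_dim
# * tokens * 2 bytes per FP16 element. The dims (32 layers, 8 KV heads,
# head dim 128) are illustrative, picked to reproduce the 512 MB figure.
def fp16_kv_cache_mb(tokens: int, layers: int = 32,
                     kv_heads: int = 8, head_dim: int = 128) -> float:
    return 2 * layers * kv_heads * head_dim * tokens * 2 / 2**20

size_4k = fp16_kv_cache_mb(4096)            # 512.0 MB, matching the table
ratio_4k = size_4k / 113                    # ~4.5x vs TurboQuant's 113 MB
ratio_16k = fp16_kv_cache_mb(16384) / 413   # ~5.0x at 16,384 tokens
```

The formula scales linearly in tokens, which is exactly why long contexts turn the KV cache into the dominant memory consumer.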


Section 08

Quality Consistency Verification

In all test scenarios, TurboQuant's output achieves 100% token-level matching with standard FP16 inference. This means that under greedy decoding (temperature=0), compression does not introduce any semantic drift, and the output quality is completely consistent with the original precision.