Speculative Decoding in Practice: A Complete Implementation to Accelerate LLM Inference on Apple Silicon

A concise PyTorch implementation demonstrating how speculative decoding raised inference throughput from 0.83× to 1.16× of the greedy decoding baseline on an Apple M2 Max, along with a detailed analysis of the key decisions and failed attempts made during optimization.

speculative decoding, LLM inference, PyTorch, Apple Silicon, MPS, draft model, verifier model, inference acceleration
Published 2026-05-16 10:15 · Recent activity 2026-05-16 10:17 · Estimated read 6 min

Section 01

Introduction: Core Value of Speculative Decoding Practice on Apple Silicon

The berezucc/speculative-decoding project provides an ~200-line PyTorch implementation showing how inference throughput was raised from 0.83× to 1.16× of the greedy decoding baseline with speculative decoding on an Apple M2 Max. It documents the key decisions and failed attempts made along the way, serving as a detailed engineering note on turning theory into practice.


Section 02

Core Mechanism and Principles of Speculative Decoding

Speculative decoding uses a small draft model to quickly propose candidate tokens and a large verifier model to validate them in parallel, improving speed while preserving output quality. The process: the draft model generates K candidate tokens; the verifier model runs one parallel forward pass over the concatenated context; and the acceptance rule follows the mathematical guarantees of the original paper, so the output distribution is exactly the same as greedy decoding with the large model alone (bit-for-bit identical at temperature 0).
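To make the mechanism concrete, here is a minimal sketch of one greedy (temperature 0) speculative step. It assumes Hugging Face-style causal LMs that expose `.logits`; the function and parameter names are illustrative, not the repository's.

```python
import torch

@torch.no_grad()
def speculative_step(draft_model, verifier_model, input_ids, k=4):
    """One speculative step at temperature 0: draft k tokens greedily,
    then accept the longest prefix the verifier would also pick greedily."""
    # 1. Draft model proposes k tokens autoregressively.
    draft_ids = input_ids
    for _ in range(k):
        logits = draft_model(draft_ids).logits[:, -1, :]
        next_id = logits.argmax(dim=-1, keepdim=True)
        draft_ids = torch.cat([draft_ids, next_id], dim=-1)

    # 2. Verifier scores the whole draft in a single forward pass.
    verifier_logits = verifier_model(draft_ids).logits
    # Greedy predictions for the k drafted positions plus one bonus position.
    preds = verifier_logits[:, input_ids.shape[1] - 1 :, :].argmax(dim=-1)

    # 3. Accept drafted tokens while they match the verifier's greedy choice.
    accepted = []
    for i in range(k):
        drafted = draft_ids[0, input_ids.shape[1] + i]
        if preds[0, i] == drafted:
            accepted.append(drafted)
        else:
            accepted.append(preds[0, i])  # verifier's correction, then stop
            break
    else:
        accepted.append(preds[0, k])      # all accepted: take the bonus token

    new_tokens = torch.stack(accepted).unsqueeze(0)
    return torch.cat([input_ids, new_tokens], dim=-1), len(accepted)
```

Because every accepted token equals the verifier's own greedy choice, repeating this step reproduces plain greedy decoding exactly while producing between 1 and K+1 tokens per verifier call.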


Section 03

Implementation Details and Correctness Verification

The project strictly verifies correctness: unit tests ensure that the speculative decoding output matches, token for token, the output of greedy decoding with the verifier alone. The code is modular (speculative.py for the main loop, utils.py for helpers, a benchmarks/ directory) to make model-swapping experiments easy.
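A sketch of the kind of token-level equivalence check described above; `speculative_decode` stands in for the repository's entry point in speculative.py, whose exact name and signature may differ.

```python
import torch
from speculative import speculative_decode  # assumed entry point; real name/signature may differ

def greedy_decode(model, input_ids, max_new_tokens):
    """Reference: plain greedy decoding with the verifier alone."""
    ids = input_ids
    for _ in range(max_new_tokens):
        with torch.no_grad():
            logits = model(ids).logits[:, -1, :]
        ids = torch.cat([ids, logits.argmax(dim=-1, keepdim=True)], dim=-1)
    return ids

def test_speculative_matches_greedy(draft_model, verifier_model, input_ids):
    # At temperature 0 the speculative output must be token-for-token
    # identical to greedy decoding with the verifier alone.
    expected = greedy_decode(verifier_model, input_ids, max_new_tokens=64)
    actual = speculative_decode(draft_model, verifier_model, input_ids,
                                max_new_tokens=64, k=4)
    assert torch.equal(expected, actual)
```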


Section 04

Five-Stage Optimization Journey: Breaking Through from 0.83× to 1.16×

The optimization proceeded in five stages:

1. Basic implementation: performance below the baseline.
2. Cache optimization: reduces redundant computation, but fixed overhead remains the bottleneck (the KV-cache idea is sketched after this list).
3. Loop restructuring: the key breakthrough; eliminates extra forward passes and exceeds the baseline for the first time.
4. FP16 attempt: failed, since the MPS bottleneck is not memory bandwidth.
5. torch.compile attempt: failed, since MPS lacks an underlying LayerNorm implementation.
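As a rough illustration of the cache idea from stage 2, the sketch below drafts tokens with a KV cache so each step feeds only one new token instead of re-running the full context. It assumes a Hugging Face-style `use_cache`/`past_key_values` interface and is not the repository's exact code.

```python
import torch

@torch.no_grad()
def draft_with_cache(draft_model, input_ids, k=4):
    """Draft k tokens greedily, reusing the KV cache between steps."""
    out = draft_model(input_ids, use_cache=True)      # full context once
    past = out.past_key_values
    next_id = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)
    drafted = [next_id]
    for _ in range(k - 1):
        # Only the newest token is fed; attention keys/values come from the cache.
        out = draft_model(next_id, past_key_values=past, use_cache=True)
        past = out.past_key_values
        next_id = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)
        drafted.append(next_id)
    return torch.cat(drafted, dim=-1), past
```

Caching removes the redundant re-encoding, but each call still pays the fixed per-forward overhead, which is why the loop restructuring in stage 3 was the decisive change.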


Section 05

Performance Data and Algorithm Characteristic Analysis

On the M2 Max, the optimized PyTorch implementation reaches 46.9 tok/s, a 1.06× speedup over greedy decoding's 44.5 tok/s. The acceptance rate varies with the number of draft tokens K and with temperature as expected. Profiling shows that 88% of the time is spent in forward passes, the fixed overhead is ~25 ms, the marginal cost is 0.7-1 ms, and efficiency therefore hinges on reducing the number of verifier calls.
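A back-of-envelope reading of those figures, treating the ~25 ms as a per-forward-pass fixed cost and 0.85 ms as the midpoint of the 0.7-1 ms per-token marginal range (both assumptions), and ignoring the draft model's own cost for simplicity: the cost per generated token falls as more drafted tokens are accepted per verifier call.

```python
def per_token_cost_ms(tokens_per_verifier_call, fixed_ms=25.0, marginal_ms=0.85):
    """Rough cost model: each verifier call pays a fixed overhead plus a
    per-token marginal cost, so accepting more tokens per call amortizes
    the fixed part (draft-model cost ignored here)."""
    call_ms = fixed_ms + marginal_ms * tokens_per_verifier_call
    return call_ms / tokens_per_verifier_call

for n in (1, 2, 3, 4):
    print(n, per_token_cost_ms(n))  # fewer verifier calls => cheaper tokens
```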


Section 06

Comparison with MLX Framework: The Importance of Runtime

MLX's 4-bit quantized model reaches 119.3 tok/s with greedy decoding (2.69× the PyTorch figure), showing that the runtime can matter more for performance than the algorithm. Speculative decoding does not help in MLX: generation there is already memory-bandwidth bound, so the algorithm's overhead exceeds its gains. This marks the technique's applicable boundary: scenarios where the verifier is comparatively slow and bandwidth is not the absolute bottleneck.


Section 07

Engineering Insights and Practical Recommendations

1. Correctness first: use verification tooling to ensure output consistency.
2. Profiling-driven optimization: locate bottlenecks instead of making blind attempts (a minimal timing sketch follows this list).
3. Hardware-aware design: adjust the draft/verifier model ratio to the hardware's characteristics.
4. Record failed attempts (FP16, torch.compile) to save others from the same pitfalls.
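For the profiling point, a minimal timing sketch (not from the repository) showing one way to measure a forward pass on MPS; synchronizing before reading the clock avoids measuring only kernel launch time.

```python
import time
import torch

def time_forward(model, input_ids, iters=20):
    """Average wall-clock time of one forward pass on MPS (or CPU fallback)."""
    device = "mps" if torch.backends.mps.is_available() else "cpu"
    model, input_ids = model.to(device), input_ids.to(device)
    with torch.no_grad():
        model(input_ids)                      # warm-up
        if device == "mps":
            torch.mps.synchronize()           # flush queued GPU work
        start = time.perf_counter()
        for _ in range(iters):
            model(input_ids)
        if device == "mps":
            torch.mps.synchronize()
    return (time.perf_counter() - start) / iters
```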

Section 08

Conclusion: Value and Significance of the Project

This project is not just an algorithm implementation but also a detailed engineering note, showing theory-to-code conversion, iterative optimization, and honest recording of failures. It provides an invaluable reference for understanding speculative decoding and accelerating inference on resource-constrained devices.