Zing Forum

Reading

SpecBlock: Block-iterative Speculative Decoding Combining Path Dependency and Low-cost Drafting

This paper proposes the SpecBlock framework, which reduces drafting costs while maintaining path dependency through a block-iterative drafting mechanism and a dynamic tree construction strategy. Compared to EAGLE-3, it achieves an 8-13% speedup while incurring only 44-52% of the drafting cost.

Tags: speculative decoding · block-iterative · path dependency · inference acceleration · dynamic tree construction · cost-aware optimization
Published 2026-05-08 12:59 · Recent activity 2026-05-11 12:21 · Estimated read 8 min

Section 01

Core Introduction to the SpecBlock Framework: A Block-iterative Solution to the Dilemma of Speculative Decoding

Title: SpecBlock: Block-iterative Speculative Decoding Combining Path Dependency and Low-cost Drafting

This paper proposes the SpecBlock framework to resolve a dilemma in speculative decoding: autoregressive drafters are expensive, while parallel drafters suffer high rejection rates. Through a block-iterative drafting mechanism and a dynamic tree construction strategy, the framework significantly reduces drafting costs while maintaining path dependency. Experiments show that compared to EAGLE-3, SpecBlock achieves an 8-13% speedup with only 44-52% of the drafting cost; when cost-aware adaptation is enabled, the advantage widens to 11-19%.


Section 02

The Dilemma of Speculative Decoding: Trade-off Between Autoregressive and Parallel Drafting

The Dilemma of Speculative Decoding

Speculative decoding is an important technique for accelerating large language model (LLM) inference. It reduces generation latency by drafting a tree of candidate continuations and verifying it in a single pass of the target model. However, existing drafters face a trade-off: autoregressive drafters (e.g., EAGLE-3) maintain path dependency but must be called once per tree level, which is costly; parallel drafters reduce the number of calls but predict positions without mutual awareness, which raises verification rejection rates. Reducing drafting cost while preserving path dependency is thus the key bottleneck.
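The draft-then-verify loop behind speculative decoding can be sketched with toy models (the drafter, verifier, and token rule below are all invented for illustration, not any real model's API):

```python
import random

def speculative_decode_step(prefix, num_draft=4):
    """One round of draft-then-verify speculative decoding (toy sketch)."""
    candidates = toy_draft(prefix, num_draft)              # cheap drafter proposes a continuation
    accepted, correction = toy_verify(prefix, candidates)  # one verification pass
    return prefix + candidates[:accepted] + [correction]

# Toy stand-ins: the "target model" deterministically continues with
# len(prefix) % 5; the drafter matches it 75% of the time.
def toy_target(prefix):
    return len(prefix) % 5

def toy_draft(prefix, n):
    out, p = [], list(prefix)
    for _ in range(n):
        tok = toy_target(p) if random.random() < 0.75 else -1
        out.append(tok)
        p.append(tok)
    return out

def toy_verify(prefix, candidates):
    p = list(prefix)
    for i, tok in enumerate(candidates):
        if tok != toy_target(p):
            return i, toy_target(p)   # reject here; the verifier emits the correct token
        p.append(tok)
    return len(candidates), toy_target(p)  # all accepted, plus one bonus token
```

Each round emits at least one correct token (the verifier's correction) and up to num_draft + 1 when the drafter is right, which is where the latency saving comes from.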


Section 03

Core Innovations of SpecBlock: Block-iterative Drafting and Path Dependency Transfer Mechanism

Block-iterative Design and Path Dependency Transfer of SpecBlock

The core innovation of SpecBlock is its "block-iterative" drafting mechanism: each forward pass of the drafter generates K interdependent positions that form a "block", and the tree grows by block expansion rather than token by token. Dependency between positions is preserved within each block (the advantage of autoregressive drafting), while block-level iteration bounds the number of drafter calls (the advantage of parallel drafting).
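As an accounting sketch (the node structure and the toy "hidden state" recurrence here are made up, not the paper's architecture), one drafter call per K-position block means ceil(depth / K) calls to grow a path to a given depth, versus one call per level for an autoregressive drafter:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    token: int
    hidden: float                      # stand-in for the drafter hidden state
    children: list = field(default_factory=list)

def draft_block(hidden, k):
    """Hypothetical one-pass drafter call: emits k interdependent positions,
    each conditioned on the position before it (intra-block dependency)."""
    block, h = [], hidden
    for _ in range(k):
        h = h * 1.3 + 1                # toy recurrence standing in for a decoding layer
        block.append((int(h) % 100, h))
    return block

def grow_path(root, depth, k=4):
    """Grow one draft path block-by-block and count drafter calls."""
    calls, node, grown = 0, root, 0
    while grown < depth:
        calls += 1                     # one forward pass per block, not per position
        take = min(k, depth - grown)
        for tok, h in draft_block(node.hidden, take):
            child = Node(tok, h)
            node.children.append(child)
            node = child
        grown += take
    return calls
```

With k=1 the accounting degenerates to autoregressive drafting (one call per level), which is what block iteration is avoiding.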

To maintain path dependency across blocks, SpecBlock adopts a dual transfer mechanism: within a block, inter-layer offsets pass the hidden state of the previous position into each decoding layer; across blocks, a new block may start from any position of the previous block and inherit its hidden state to continue the path, ensuring path coherence and a high acceptance rate.
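The inter-block transfer can be illustrated as follows (the hidden-state update is a placeholder recurrence, not the paper's model): a new block branches from an arbitrary position of the previous block by inheriting that position's hidden state.

```python
def draft_block(hidden, k=4):
    """Hypothetical one-pass drafter: each position's toy hidden state
    depends on the previous position within the block."""
    positions, h = [], hidden
    for _ in range(k):
        h = h * 2 + 1                  # placeholder for a decoding-layer update
        positions.append({"token": int(h) % 100, "hidden": h})
    return positions

def extend_from(block, index, k=4):
    """Inter-block path-dependency transfer: the new block starts from an
    arbitrary position of the previous block and inherits its hidden state,
    so the continued path stays coherent with what was already drafted."""
    return draft_block(block[index]["hidden"], k)

first = draft_block(1.0)
branch = extend_from(first, 1)   # branch from the 2nd position, not just the last
```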


Section 04

Dynamic Tree Construction and Cost-aware Adaptation: Optimizing Verification Resources and Deployment Efficiency

Dynamic Tree Construction and Cost-aware Adaptation

SpecBlock replaces the fixed top-k structure with a collaboratively trained ranking head that dynamically allocates the branch budget according to per-position acceptance probabilities, concentrating verification resources on the positions most likely to be accepted.
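A toy version of such budget allocation (greedy, with a made-up diminishing-returns factor; the actual ranking head is learned, not hand-coded) might look like:

```python
import heapq

def allocate_branches(accept_probs, budget):
    """Toy stand-in for a learned ranking head: spend a fixed branch budget
    on the positions most likely to be accepted, instead of a fixed top-k."""
    alloc = [0] * len(accept_probs)
    heap = [(-p, i) for i, p in enumerate(accept_probs)]
    heapq.heapify(heap)
    for _ in range(budget):
        neg_p, i = heapq.heappop(heap)   # currently most promising position
        alloc[i] += 1
        # assumed diminishing returns: each extra branch at the same
        # position is worth half as much as the previous one
        heapq.heappush(heap, (neg_p * 0.5, i))
    return alloc
```

With acceptance probabilities [0.9, 0.5, 0.1] and a budget of 4, this yields [2, 2, 0]: the low-probability position receives no branches at all, unlike a fixed top-k layout.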

In addition, a cost-aware bandit mechanism is deployed: using free feedback from the verifier, the drafter is updated only when the expected throughput gain exceeds the update cost, enabling adaptive optimization for the operating environment.
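A minimal sketch of such a gate (the interface and the gain formula are assumptions, not the paper's algorithm):

```python
class CostAwareUpdater:
    """Sketch of a cost-aware update gate. Acceptance feedback from the
    verifier is 'free' -- it is produced during normal decoding anyway --
    so we accumulate it and only pay for a drafter update when the
    projected gain beats the one-off cost.
    """
    def __init__(self, update_cost, horizon):
        self.update_cost = update_cost   # tokens' worth of time one update costs
        self.horizon = horizon           # tokens we expect to still generate
        self.accepted = 0
        self.proposed = 0

    def observe(self, accepted, proposed):
        # free feedback: how many draft tokens the verifier accepted
        self.accepted += accepted
        self.proposed += proposed

    def should_update(self, projected_accept_rate):
        if self.proposed == 0:
            return False
        current = self.accepted / self.proposed
        # expected extra accepted tokens over the remaining horizon
        expected_gain = (projected_accept_rate - current) * self.horizon
        return expected_gain > self.update_cost
```

For example, at a 60% observed acceptance rate, a projected improvement to 70% over a 1000-token horizon justifies an update costing 50 tokens' worth of time, while a projected 62% does not.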


Section 05

Training Optimization: Effective Prefix Masking Strategy


During training, SpecBlock adopts an effective prefix masking strategy: when an earlier position is predicted incorrectly, the loss for all subsequent positions is masked. This avoids training the drafter on erroneous prefixes that would never be conditioned on at inference time, improving training efficiency and model quality.
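Under that strategy, the per-position loss mask could be computed along these lines (a simplified scalar version for a single sequence; real training would operate on batched tensors):

```python
def prefix_mask_losses(predictions, targets, token_losses):
    """Effective prefix masking sketch: the first wrongly predicted position
    still contributes loss (we want to fix it), but everything after it is
    zeroed out, since at inference the drafter would never be conditioned
    on that erroneous prefix."""
    masked, valid = [], True
    for pred, tgt, loss in zip(predictions, targets, token_losses):
        masked.append(loss if valid else 0.0)
        if pred != tgt:
            valid = False   # positions after the first error are masked
    return masked
```

So a sequence whose second position is wrong keeps losses for positions one and two and masks the rest, e.g. [1.0, 1.0, 0.0] for three positions.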


Section 06

Experimental Results: Performance Comparison and Advantage Verification of SpecBlock

Experimental Results and Performance Comparison

Experiments show that compared to EAGLE-3, SpecBlock achieves an average speedup of 8-13% with only 44-52% of the drafting cost; with cost-aware adaptation enabled, the advantage widens to 11-19%. The results confirm the effectiveness of the block-iterative design, while dynamic tree construction and cost-aware adaptation open up further optimization headroom.


Section 07

Implications for LLM Inference Optimization: The Value of Balancing Dependency and Parallelism

Implications for LLM Inference Optimization

SpecBlock demonstrates that fine-grained architectural design can strike a balance between conflicting optimization goals, and the block-iterative idea may extend to other scenarios that trade off dependency against parallelism.

The cost-aware adaptation mechanism demonstrates the potential of dynamic optimization during deployment. As LLM applications become more diverse, the value of adaptive systems will become increasingly prominent.


Section 08

Limitations and Future Directions: Adaptive Block Size and Efficient Dynamic Tree Exploration

Limitations and Future Directions

Current limitations: block size has a significant impact on performance, and its optimal value varies across tasks and models; the complexity of dynamic tree construction may become a bottleneck at large scale. Future work could explore adaptive block-size strategies and more efficient dynamic tree algorithms.