
BPDQ: Variable Quantization Grid Technology Based on Bit-Plane Decomposition Enables Large Language Models to Maintain High Performance at 2-Bit Precision

The ICML 2026 accepted paper BPDQ proposes a breakthrough post-training quantization (PTQ) method that constructs variable quantization grids via bit-plane decomposition. It significantly outperforms traditional PTQ methods in 2-3 bit low-precision scenarios, achieving an 83.85% GSM8K accuracy for Qwen2.5-72B on a single RTX 3090.

Tags: Large Language Models · Quantization · Post-Training Quantization (PTQ) · Bit-Plane Decomposition · Low-Bit Inference · Model Compression · ICML 2026
Published 2026-05-16 11:41 · Recent activity 2026-05-16 11:47 · Estimated read 6 min

Section 01

BPDQ: A Breakthrough Method for High-Performance Inference of Large Models at 2-Bit Low Precision

The ICML 2026 accepted paper introduces BPDQ, a post-training quantization method built on variable quantization grids constructed via bit-plane decomposition. It significantly outperforms traditional PTQ methods at 2-3 bit precision, reaching 83.85% GSM8K accuracy for Qwen2.5-72B on a single RTX 3090 and offering a new path for deploying large models in low-resource settings.

Section 02

Memory Bottlenecks in Large Model Inference and Limitations of Traditional PTQ

With the expansion of parameter scales in large language models, inference memory usage and bandwidth requirements have become core challenges for deployment. Qwen2.5-72B requires over 140GB of VRAM at 16-bit floating-point precision, far exceeding the capacity of consumer-grade GPUs. Post-training quantization (PTQ) is favored for not requiring retraining, but traditional PTQ leads to a sharp drop in model quality at 2-3 bit precision, limiting deployment in low-resource scenarios.
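The VRAM figure follows directly from the parameter count; a minimal back-of-the-envelope sketch (weights only, ignoring KV cache, activations, and runtime overhead):

```python
def weight_vram_gb(n_params: float, bits: int) -> float:
    """Memory needed to store the model weights alone, in decimal GB."""
    return n_params * bits / 8 / 1e9

fp16_gb = weight_vram_gb(72e9, 16)  # ~144 GB: far beyond any consumer GPU
int2_gb = weight_vram_gb(72e9, 2)   # ~18 GB for weights: within a 24GB RTX 3090
```

The paper's reported 22.69GB peak is somewhat above the 18GB weight footprint, consistent with activation and runtime overhead on top of the quantized weights.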

Section 03

Limitations of Shape Invariance in Fixed Quantization Grids

Existing PTQ methods force each weight group onto a quantization grid of fixed shape (e.g., a uniformly spaced UINT2 grid), which cannot adapt to complex weight distributions and amplifies quantization error in low-bit settings. This leaves practitioners with a dilemma: high bit-widths with large memory overhead, or low bit-widths with severe accuracy loss.
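The fixed-grid baseline can be sketched as round-to-nearest onto evenly spaced levels. Note how only the scale and zero-point vary per group while the grid's shape (equal spacing) never changes; this is an illustrative sketch, not any specific library's kernel:

```python
import numpy as np

def uint2_quantize(w: np.ndarray):
    """Round-to-nearest onto a fixed, uniformly spaced 4-level (UINT2) grid.

    The grid's shape is identical for every weight group; only the scale
    and zero-point adapt, so heavy-tailed or clustered weight
    distributions are served poorly."""
    lo, hi = float(w.min()), float(w.max())
    scale = max((hi - lo) / 3.0, 1e-12)   # 2 bits -> 4 levels: 0..3
    q = np.clip(np.round((w - lo) / scale), 0, 3).astype(np.uint8)
    return q, scale, lo

def uint2_dequantize(q, scale, zero):
    return q * scale + zero
```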

Section 04

Core Innovation of BPDQ: Variable Quantization Grid Design

Bit-Plane Decomposition Quantization (BPDQ) constructs variable quantization grids via bit-plane decomposition and scalar coefficients, breaking the shape-invariance constraint. It decomposes weights into multiple bit planes, each carrying different information, and combines them with learned scalar coefficients so the grid adapts to the data distribution, expanding the feasible solution space while remaining consistent with the Hessian-induced geometric structure.
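To make the idea concrete, here is a greedy residual sketch of a bit-plane decomposition with learned scalar coefficients, W ≈ Σₖ cₖ·Bₖ with Bₖ ∈ {−1, +1}. This is a didactic illustration of how variable coefficients let the reconstruction levels adapt to the weight distribution, not the paper's exact algorithm:

```python
import numpy as np

def bitplane_decompose(w: np.ndarray, n_planes: int):
    """Greedily fit w ≈ sum_k c_k * B_k with sign planes B_k in {-1, +1}.

    Unlike a fixed uniform grid, the learned coefficients c_k place the
    2**n_planes reconstruction levels wherever the data needs them."""
    residual = w.astype(np.float64).copy()
    planes, coeffs = [], []
    for _ in range(n_planes):
        b = np.where(residual >= 0, 1.0, -1.0)
        c = np.mean(np.abs(residual))   # least-squares optimal scale for a sign plane
        planes.append(b)
        coeffs.append(c)
        residual -= c * b               # fit the next plane to what is left
    return np.array(coeffs), np.stack(planes)

def bitplane_reconstruct(coeffs, planes):
    # Contract the plane axis: works for weights of any shape.
    return np.tensordot(coeffs, planes, axes=1)
```

Each added plane provably never increases the residual norm, which is what makes the greedy fit stable.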

Section 05

Algorithm Mechanism: Iterative Optimization and Error Compensation

BPDQ adopts an iterative optimization strategy, using approximate second-order information (the Hessian) to adjust the bit-plane coefficients, and a progressive error-compensation mechanism that minimizes differences in layer outputs, preserving the downstream task performance of the quantized model. The paper's appendix provides a convergence analysis establishing the theoretical stability and consistency of the procedure.
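The error-compensation idea can be sketched in the style of OBQ/GPTQ-type solvers: quantize one column at a time and push the resulting error onto the not-yet-quantized columns via the inverse of the (damped) Hessian approximation. The inner quantizer `rtn2` below is a hypothetical placeholder; the whole block is a didactic sketch, not BPDQ's exact procedure:

```python
import numpy as np

def rtn2(col):
    """Placeholder inner quantizer: round-to-nearest onto 4 levels (2 bits)."""
    lo, hi = float(col.min()), float(col.max())
    scale = max((hi - lo) / 3.0, 1e-12)
    return np.clip(np.round((col - lo) / scale), 0, 3) * scale + lo

def quantize_with_compensation(W, X, quant_fn=rtn2):
    """Column-wise quantization with progressive error compensation.

    W: (out_dim, in_dim) weights; X: (in_dim, n_samples) calibration inputs.
    The damped Gram matrix X X^T approximates the Hessian of the layer-wise
    reconstruction loss; its inverse tells us how to spread each column's
    quantization error over the remaining columns."""
    W = W.astype(np.float64).copy()
    G = X @ X.T
    H = G + 1e-2 * np.mean(np.diag(G)) * np.eye(G.shape[0])  # damping
    Hinv = np.linalg.inv(H)
    Q = np.zeros_like(W)
    for j in range(W.shape[1]):
        q = quant_fn(W[:, j])
        Q[:, j] = q
        err = (W[:, j] - q) / Hinv[j, j]
        W[:, j + 1:] -= np.outer(err, Hinv[j, j + 1:])  # compensate remaining cols
    return Q
```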

Section 06

Experimental Evidence: Breakthrough Performance at 2-Bit Precision

BPDQ performs strongly across benchmarks. After 2-bit quantization of Qwen2.5-72B, peak VRAM on a single RTX 3090 is only 22.69GB, and GSM8K accuracy reaches 83.85% (a drop of about 7 percentage points from the 90.83% achieved at 16-bit precision). Llama-2-7B also performs well in 2- and 3-bit configurations, and the checkpoints have been released on the Hugging Face Hub.

Section 07

Engineering Implementation and Theoretical Contributions

On the engineering side, BPDQ is integrated as a patch into GPTQModel v5.7.0, keeping it compatible with the existing quantization ecosystem. It provides complete quantization-and-evaluation workflow scripts (supporting C4 calibration and lm-evaluation-harness evaluation), flexible YAML configuration, and an eval_only mode. Theoretically, the paper proves that variable quantization grids expand the feasible solution set and that the quantization process is consistent with the Hessian-induced geometry.
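The released scripts are not reproduced in this digest, so every field name below is an assumption rather than the actual schema; a hypothetical sketch of what such a YAML workflow configuration might look like:

```yaml
# Hypothetical config sketch; the real released scripts may use different keys.
model: Qwen/Qwen2.5-72B
bits: 2
calibration:
  dataset: c4          # C4 calibration, per the paper's workflow
  n_samples: 128
eval:
  harness: lm-evaluation-harness
  tasks: [gsm8k]
eval_only: false       # set true to skip quantization and only evaluate
```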

Section 08

Conclusion and Outlook: A New Paradigm for Low-Bit Quantization

BPDQ breaks the fixed-grid assumption, balances performance and efficiency in 2-3 bit settings, lets large models run on consumer-grade hardware, and opens up local inference on edge and mobile devices. We look forward to more quantization work that combines theoretical depth with engineering practicality.