# BPDQ: Variable Quantization Grid Technology Based on Bit-Plane Decomposition Enables Large Language Models to Maintain High Performance at 2-Bit Precision

> The ICML 2026 accepted paper BPDQ proposes a breakthrough post-training quantization (PTQ) method that constructs variable quantization grids via bit-plane decomposition. It significantly outperforms traditional PTQ methods in 2-3 bit low-precision scenarios, achieving an 83.85% GSM8K accuracy for Qwen2.5-72B on a single RTX 3090.

- Board: [Openclaw Llm](https://www.zingnex.cn/en/forum/board/openclaw-llm)
- Published: 2026-05-16T03:41:55.000Z
- Last activity: 2026-05-16T03:47:40.374Z
- Popularity: 159.9
- Keywords: large language models, quantization, post-training quantization, PTQ, bit-plane decomposition, low-bit inference, model compression, ICML 2026
- Page link: https://www.zingnex.cn/en/forum/thread/bpdq-2
- Canonical: https://www.zingnex.cn/forum/thread/bpdq-2
- Markdown source: floors_fallback

---

## BPDQ: A Breakthrough Method for High-Performance Inference of Large Models at 2-Bit Low Precision

The ICML 2026 accepted paper BPDQ proposes a breakthrough post-training quantization method that constructs variable quantization grids via bit-plane decomposition. The method significantly outperforms traditional PTQ approaches in 2-3 bit low-precision settings, reaching 83.85% GSM8K accuracy for Qwen2.5-72B on a single RTX 3090 and opening a new path for deploying large models in low-resource scenarios.

## Memory Bottlenecks in Large Model Inference and Limitations of Traditional PTQ

With the growth of parameter counts in large language models, inference memory footprint and bandwidth have become the core deployment challenges. Qwen2.5-72B needs over 140GB of VRAM just for its weights at 16-bit floating-point precision, far beyond any consumer-grade GPU. Post-training quantization (PTQ) is attractive because it requires no retraining, but traditional PTQ suffers a sharp drop in model quality at 2-3 bit precision, which has limited deployment in low-resource scenarios.
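The arithmetic behind this memory wall is straightforward. A quick sketch (using an approximate parameter count for Qwen2.5-72B) shows why FP16 weights cannot fit on a 24GB consumer GPU while 2-bit weights can:

```python
# Back-of-the-envelope estimate of weight-storage VRAM at different
# precisions (weights only; activations and KV cache add further overhead).
def weight_memory_gb(num_params: float, bits_per_weight: float) -> float:
    """GB (10^9 bytes) needed to store num_params weights."""
    return num_params * bits_per_weight / 8 / 1e9

PARAMS_72B = 72.7e9  # Qwen2.5-72B parameter count (approximate)

fp16_gb = weight_memory_gb(PARAMS_72B, 16)  # ~145 GB: far beyond a 24 GB RTX 3090
int2_gb = weight_memory_gb(PARAMS_72B, 2)   # ~18 GB: fits on a single RTX 3090
```

Going from 16 bits to 2 bits is an 8x reduction in weight storage, which is what moves a 72B model from multi-GPU territory into the 24GB range of one consumer card.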

## Limitations of Shape Invariance in Fixed Quantization Grids

Existing PTQ methods force every weight group onto a shape-invariant quantization grid (e.g., a uniformly spaced UINT2 grid). Such grids cannot adapt to complex weight distributions, so quantization error is amplified in low-bit settings. Practitioners thus face a dilemma: high bit-widths with large memory overhead, or low bit-widths with severe accuracy loss.
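To see why grid shape matters, consider a small numerical experiment (illustrative only, not from the paper): on a weight group whose mass clusters near zero with a few large outliers, a fixed uniform 4-level (2-bit) grid wastes levels on the tails, while a distribution-aware 4-level grid (quantile levels, used here as a stand-in for any adaptive grid) cuts the reconstruction error dramatically:

```python
import numpy as np

# Synthetic weight group: bulk of the mass near zero, a few large outliers,
# a pattern commonly reported for LLM weight/activation distributions.
inner = 0.1 * np.linspace(-1.0, 1.0, 250)
outliers = np.array([8.0, -8.0, 7.5, -7.5, 6.0, -6.0])
w = np.concatenate([inner, outliers])

def quantize_to_grid(w, grid):
    """Round each weight to its nearest grid level."""
    idx = np.argmin(np.abs(w[:, None] - grid[None, :]), axis=1)
    return grid[idx]

# Fixed UINT2-style grid: 4 evenly spaced levels over [min, max].
# Its *shape* is identical no matter how the weights are distributed.
uniform_grid = np.linspace(w.min(), w.max(), 4)
mse_uniform = np.mean((w - quantize_to_grid(w, uniform_grid)) ** 2)

# Distribution-aware 4-level grid: levels placed where the mass actually is.
adaptive_grid = np.quantile(w, [0.125, 0.375, 0.625, 0.875])
mse_adaptive = np.mean((w - quantize_to_grid(w, adaptive_grid)) ** 2)
```

The uniform grid leaves the dense region near zero with only the middle levels, while the adaptive grid concentrates all four levels there, so `mse_adaptive` comes out far below `mse_uniform`.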

## Core Innovation of BPDQ: Variable Quantization Grid Design

Bit-Plane Decomposition Quantization (BPDQ) constructs variable quantization grids from bit planes and scalar coefficients, breaking the shape-invariance constraint. It decomposes the weights into multiple bit planes (each carrying different information) and combines them with per-plane scalar coefficients, so the grid adapts to the data distribution. This expands the feasible solution space while remaining consistent with the Hessian-induced geometric structure.
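A minimal sketch of the idea (function names and coefficient values are illustrative, not the paper's): with two bit planes, each weight is coded by two bits, and the dequantized level is `offset + c0*b0 + c1*b1`. If the plane coefficients are locked to a powers-of-two ratio, the four levels are evenly spaced, exactly a fixed UINT2 grid; freeing the coefficients lets the same two bits describe an unevenly spaced grid:

```python
import numpy as np

def bitplane_grid(coeffs, offset):
    """Sorted list of all 2^k levels for k bit planes with per-plane scalars.
    The level for bit code (b_0, ..., b_{k-1}) is offset + sum_j coeffs[j]*b_j."""
    k = len(coeffs)
    codes = np.array([[(c >> j) & 1 for j in range(k)] for c in range(2 ** k)])
    return np.sort(codes @ np.asarray(coeffs) + offset)

# Plane coefficients in a 2:1 (powers-of-two) ratio -> evenly spaced levels,
# i.e., an ordinary fixed UINT2 grid.
fixed = bitplane_grid([1.0, 0.5], offset=-1.0)      # [-1.0, -0.5, 0.0, 0.5]

# Independent per-plane scalars -> uneven spacing that can track a skewed
# or clustered weight distribution. Same 2 bits per weight, different shape.
variable = bitplane_grid([3.0, 0.5], offset=-1.0)   # [-1.0, -0.5, 2.0, 2.5]
```

The storage cost is unchanged (2 bits per weight plus a few scalars per group); only the shape of the representable grid becomes a free parameter.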

## Algorithm Mechanism: Iterative Optimization and Error Compensation

BPDQ adopts an iterative optimization strategy: it uses approximate second-order information (the Hessian matrix) to adjust the bit-plane coefficients, and minimizes the difference in layer outputs through a progressive error-compensation mechanism, preserving the quantized model's downstream-task performance. The paper's appendix provides a convergence analysis establishing the theoretical stability and consistency of the procedure.
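The paper's exact update is not reproduced in the post, but the flavor of such a loop can be sketched as alternating between nearest-level code assignment and a curvature-weighted least-squares refit of the plane coefficients, where a diagonal `h_diag` stands in for the approximate second-order (Hessian) information. All names and the initialization below are illustrative, not BPDQ's actual algorithm:

```python
import numpy as np

def fit_bitplane_coeffs(w, h_diag, n_bits=2, n_iter=10):
    """Alternate (1) nearest-level code assignment and (2) a curvature-weighted
    least-squares refit of per-plane coefficients and offset.
    Simplified diagonal-Hessian proxy, not the paper's exact update."""
    codes = np.array([[(c >> j) & 1 for j in range(n_bits)]
                      for c in range(2 ** n_bits)], dtype=float)
    coeffs = np.ptp(w) / 2.0 ** np.arange(1, n_bits + 1)  # crude init
    offset = float(w.min())
    for _ in range(n_iter):
        grid = codes @ coeffs + offset
        idx = np.argmin(np.abs(w[:, None] - grid[None, :]), axis=1)
        X = np.hstack([codes[idx], np.ones((len(w), 1))])  # bit columns + offset
        # Weighted normal equations: X^T H X theta = X^T H w.
        XtH = X.T * h_diag
        theta, *_ = np.linalg.lstsq(XtH @ X, XtH @ w, rcond=None)
        coeffs, offset = theta[:n_bits], theta[n_bits]
    grid = codes @ coeffs + offset
    idx = np.argmin(np.abs(w[:, None] - grid[None, :]), axis=1)
    return grid, float(np.sum(h_diag * (w - grid[idx]) ** 2))

w = np.array([0.10, 0.12, 0.05, 0.90, 0.95, -0.80])
h = np.ones_like(w)                    # identity curvature for the demo
grid, err = fit_bitplane_coeffs(w, h)  # 4-level grid adapted to this group
```

Each refit can only lower the weighted error for the current assignments, and each reassignment can only lower it for the current grid, so the objective decreases monotonically, the same kind of stability argument the paper's convergence analysis formalizes for its full second-order update.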

## Experimental Evidence: Breakthrough Performance at 2-Bit Precision

BPDQ performs strongly across multiple benchmarks. After 2-bit quantization, Qwen2.5-72B runs with a peak VRAM of only 22.69GB on a single RTX 3090 and reaches 83.85% on GSM8K, a drop of roughly 7 percentage points from the 90.83% achieved at 16-bit precision. Llama-2-7B likewise performs well in 2-bit and 3-bit configurations, and the quantized checkpoints have been released on the Hugging Face Hub.

## Engineering Implementation and Theoretical Contributions

On the engineering side, BPDQ is integrated as a patch into GPTQModel v5.7.0, keeping it compatible with the existing quantization ecosystem. The release provides complete quantization-and-evaluation workflow scripts (supporting C4 calibration and lm-evaluation-harness evaluation), flexible YAML configuration, and an eval_only mode. Theoretically, the paper proves that variable quantization grids expand the feasible solution set and that the quantization process remains consistent with the Hessian-induced geometry; these contributions were recognized by ICML 2026.
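The post does not show the actual configuration schema; purely as an illustration, a workflow config of the kind described (C4 calibration, lm-evaluation-harness evaluation, eval_only switch) might look like the following, with every field name hypothetical:

```yaml
# Hypothetical BPDQ workflow config - field names are illustrative,
# not the released schema.
model: Qwen/Qwen2.5-72B
quant:
  method: bpdq
  bits: 2
  group_size: 128
calibration:
  dataset: c4            # C4 calibration, as described in the post
  num_samples: 128
eval:
  harness: lm-evaluation-harness
  tasks: [gsm8k]
eval_only: false         # set true to skip quantization and only evaluate
output_dir: ./bpdq-qwen2.5-72b-2bit
```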

## Conclusion and Outlook: A New Paradigm for Low-Bit Quantization

BPDQ breaks the fixed-grid assumption and strikes a balance between performance and efficiency in 2-3 bit settings, letting large models run on consumer-grade hardware and opening up local inference on edge and mobile devices. We look forward to more quantization innovations that combine theoretical depth with engineering practicality.
