Zing Forum

MUXQ: Mixed-to-Unified Matrix Quantization via Low-Rank Outlier Decomposition

This article introduces the MUXQ quantization method, which addresses the outlier problem in large model quantization by detecting outlier channels in activations and introducing an auxiliary matrix to reallocate outlier magnitudes. It achieves INT8 quantization accuracy close to FP16 on the GPT-2 series models.

Tags: model quantization, outlier decomposition, INT8 quantization, edge deployment, NPU acceleration, MUXQ
Published 2026-04-06 22:13 · Recent activity 2026-04-07 15:49 · Estimated read: 5 min
Section 01

MUXQ: A New Method for High-Precision INT8 Quantization via Low-Rank Outlier Decomposition

This article introduces MUXQ, a quantization method that targets the outlier problem in large-model quantization. By detecting outlier channels in activations and introducing a low-rank auxiliary matrix that reallocates outlier magnitudes, it overcomes the limitations of existing approaches. MUXQ achieves INT8 quantization accuracy close to FP16 on the GPT-2 series, keeps a unified computation structure, and is well suited to accelerated deployment on edge NPUs.

Section 02

Outlier Dilemma in Large Model Quantization and Limitations of Existing Methods

Edge deployment of large models requires INT8 quantization to exploit NPU hardware optimizations. However, outlier channels in activations (a few channels with magnitudes far larger than the rest) inflate the quantization scale and crush the effective precision of normal values. Existing methods fall short: LLM.int8() uses mixed precision, which breaks the unified computation graph; SmoothQuant merely shifts quantization difficulty from activations to weights; and ZeroQuant suffers accuracy loss. None of them fundamentally solves the outlier problem.
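To make the dilemma concrete, here is a minimal NumPy sketch (toy data, not from the paper) showing how a single outlier channel inflates the per-tensor INT8 scale and degrades the precision of the normal channels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy activations: 8 "normal" channels plus one outlier channel
# whose magnitude is ~50x larger (hypothetical values for illustration).
normal = rng.normal(0.0, 1.0, size=(128, 8))
outlier = rng.normal(0.0, 50.0, size=(128, 1))
acts = np.concatenate([normal, outlier], axis=1)

def qdq_int8(x):
    """Symmetric per-tensor INT8 quantize-dequantize."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127)
    return q * scale

# Error on the normal channels when the outlier sets the scale:
err_with_outlier = np.abs(qdq_int8(acts)[:, :8] - normal).mean()
# Error if the normal channels were quantized on their own:
err_alone = np.abs(qdq_int8(normal) - normal).mean()

print(err_with_outlier / err_alone)  # the outlier blows up the scale,
                                     # so normal-channel error grows sharply
```

The single large channel forces the quantization step to be sized for it, leaving only a few effective levels for everything else.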

Section 03

Core Idea of MUXQ: Proactive Reallocation of Outliers

MUXQ introduces a low-rank auxiliary matrix that spreads the magnitude of outlier channels across many more channels, diluting the impact of any single outlier. Its advantages: it maintains a unified INT8 computation structure (hardware-friendly), the low-rank auxiliary matrix adds negligible overhead, and it can be combined with other quantization techniques for further gains.
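One plausible reading of this idea, sketched with toy data: decompose the activations into a small-range residual plus a rank-r outlier component (extracted here with a truncated SVD; the paper learns its auxiliary matrix instead, so this is an assumption-laden sketch), quantize the residual to INT8, and add back a cheap rank-r correction:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, r = 64, 16, 2

X = rng.normal(0, 1, (n, d))
X[:, [3, 11]] *= 30.0                      # two toy outlier channels

# Rank-r decomposition of the outlier part: X ≈ X_res + L,
# where the low-rank term L absorbs the outlier channels.
U_, S_, Vt_ = np.linalg.svd(X, full_matrices=False)
L = (U_[:, :r] * S_[:r]) @ Vt_[:r]         # low-rank outlier component
X_res = X - L                              # residual with a small range

def qdq_int8(x):
    """Symmetric per-tensor INT8 quantize-dequantize."""
    s = np.abs(x).max() / 127.0
    return np.round(np.clip(x / s, -127, 127)) * s

W = rng.normal(0, 0.1, (d, d))
# Unified compute: INT8 matmul on the residual + rank-r correction.
Y = qdq_int8(X_res) @ W + L @ W
err = np.abs(Y - X @ W).mean() / np.abs(X @ W).mean()
```

Because the residual's dynamic range is small, its INT8 quantization is accurate, and the rank-r correction costs only O(n·r + r·d) extra work per matmul.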

Section 04

Key Steps in MUXQ Technical Implementation

1. Outlier channel detection: compute per-channel activation statistics (maximum values, high quantiles) to identify outlier channels.
2. Low-rank auxiliary matrix design: adopt a U*V^T low-rank form that linearly transforms activations to spread outlier magnitudes across channels.
3. Joint optimization: learn the auxiliary matrix parameters end to end to minimize quantization loss while constraining low-rank complexity.
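Step 1 can be sketched as follows; the quantile statistic and the median-based threshold are illustrative assumptions, not the paper's exact criterion:

```python
import numpy as np

def detect_outlier_channels(acts, q=0.999, ratio=10.0):
    """Flag channels whose peak magnitude dwarfs the typical channel.

    acts:  (tokens, channels) calibration activations
    q:     high quantile used as a robust per-channel peak statistic
    ratio: a channel is flagged if its peak exceeds `ratio` times the
           median peak (this threshold rule is a hypothetical choice).
    """
    peaks = np.quantile(np.abs(acts), q, axis=0)
    return np.flatnonzero(peaks > ratio * np.median(peaks))

# Toy calibration batch with two planted outlier channels:
rng = np.random.default_rng(2)
acts = rng.normal(0, 1, (4096, 32))
acts[:, [5, 20]] *= 60.0
print(detect_outlier_channels(acts))  # flags channels 5 and 20
```

Using a high quantile rather than the raw maximum makes the statistic robust to single spurious spikes in the calibration data.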

Section 05

Experimental Validation of MUXQ: Accuracy Close to FP16 with Controllable Overhead

On the GPT-2 series (0.1B/0.3B/0.7B) evaluated on WikiText-2, MUXQ outperforms naive quantization: under per-tensor INT8 quantization its accuracy is close to FP16, with only a small perplexity gap; the latency added by the low-rank auxiliary matrix is acceptable; unlike LLM.int8(), it preserves a unified computation graph; and it resolves the outlier problem more thoroughly than SmoothQuant.
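For reference, the perplexity metric behind the "small perplexity gap" claim is the exponential of the mean per-token negative log-likelihood. The numbers below are made up for illustration and are not the paper's results:

```python
import math

def perplexity(token_nlls):
    """Perplexity = exp(mean negative log-likelihood per token, in nats)."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# Toy comparison with fabricated per-token NLLs:
fp16_ppl = perplexity([3.10, 2.95, 3.20, 3.05])
int8_ppl = perplexity([3.12, 2.97, 3.23, 3.07])
gap = int8_ppl - fp16_ppl  # a "small perplexity gap" (lower is better)
```

A quantized model whose per-token NLL rises only slightly yields a perplexity within a fraction of a point of FP16, which is the regime the article describes.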

Section 06

Technical Significance and Application Value of MUXQ

MUXQ removes a key bottleneck for edge deployment by making accurate INT8 quantization feasible; its unified computation graph fully exploits NPU acceleration; its modular design integrates easily into existing frameworks; and the low-rank idea can be extended to other models and layers, helping bring large models to edge devices.

Section 07

Limitations of MUXQ and Future Research Directions

MUXQ has so far been validated only on GPT-2; validation on larger models (7B/13B) is still needed. Auxiliary-matrix learning requires calibration data, so scenarios with severe data constraints remain to be studied. Future directions include more efficient outlier detection, adaptive rank selection, and extension to other neural network architectures.