Zing Forum


ITQ3_S: A Ternary Interleaved Quantization Technique Based on Rotational Domain Smoothing for High-Precision 3-Bit Large Model Inference

ITQ3_S pre-rotates the weight space using the Fast Walsh-Hadamard Transform (FWHT) to disperse the energy of outliers across the entire vector, achieving perplexity performance close to FP16 while providing over 1.5x the throughput of 4-bit alternatives on the RTX 5090.

Tags: quantization, 3-bit, FWHT, LLM inference, TurboQuant, NVIDIA RTX 5090, model compression
Published 2026-03-30 08:03 · Recent activity 2026-03-31 11:19 · Estimated read: 7 min

Section 01

[Introduction] ITQ3_S: A High-Precision Breakthrough Solution for 3-Bit Large Model Inference

ITQ3_S is a ternary interleaved quantization technique based on rotational-domain smoothing. At its core, it uses the Fast Walsh-Hadamard Transform (FWHT) to pre-rotate the weight space, dispersing the energy of outliers across the entire vector and achieving perplexity close to FP16. At the same time, on the NVIDIA RTX 5090 its throughput exceeds 1.5x that of 4-bit alternatives, making it a balanced high-precision, high-performance option for low-bit inference of large models.


Section 02

Research Background and Quantization Dilemmas

Large Language Models (LLMs) are expensive to deploy: memory footprint and compute cost grow steeply with model scale during inference. Quantization is a core model compression method, but traditional 3-bit quantization suffers catastrophic precision loss because of heavy-tailed weight distributions and inter-channel outliers. Existing solutions often simply clip outliers, which damages the model's expressive power and accumulates error. Balancing extreme compression against model fidelity has become an industry focus.


Section 03

Core Technical Architecture of ITQ3_S

Rotational Domain Adaptive Quantization Strategy

ITQ3_S introduces the TurboQuant (TQ) rotational domain adaptive quantization strategy, which pre-rotates the weight space based on FWHT to redistribute the energy of outliers concentrated in specific channels across the entire vector space, transforming the heavy-tailed distribution into a near-Gaussian distribution that is more suitable for uniform ternary encoding.
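The rotation step can be illustrated with a minimal NumPy sketch (not the authors' code): applying an orthonormal FWHT to a vector with a single outlier channel spreads that outlier's energy evenly across every coefficient, flattening the peak-to-RMS ratio that makes uniform quantization hard.

```python
import numpy as np

def fwht(x):
    """Orthonormal Fast Walsh-Hadamard Transform; len(x) must be a power of 2."""
    x = x.astype(np.float64).copy()
    n, h = len(x), 1
    while h < n:
        for i in range(0, n, h * 2):        # butterfly pass over blocks of size 2h
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x / np.sqrt(n)                   # orthonormal scaling: fwht(fwht(x)) == x

w = np.zeros(8)
w[3] = 8.0                                  # one extreme outlier channel
r = fwht(w)
print(np.abs(r))                            # every coefficient now has the same magnitude
print(np.max(np.abs(w)) / np.sqrt(np.mean(w ** 2)))   # peak-to-RMS before rotation
print(np.max(np.abs(r)) / np.sqrt(np.mean(r ** 2)))   # peak-to-RMS after rotation: 1.0
```

After the rotation the vector is perfectly flat in this extreme case; for real weight matrices the effect is to pull a heavy-tailed distribution toward a near-Gaussian one, as the paragraph above describes.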

Zero-Error Round-Trip Fidelity

The team derived a strict dequantization procedure: a 256-point inverse FWHT, applied as weights are loaded into CUDA shared memory, exactly undoes the offline rotation, so the transform itself round-trips with zero error between offline quantization and online inference. The reconstruction error is determined solely by the ternary quantization grid and is strictly smaller than that of a uniform 3-bit baseline under the same bit budget.
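The article does not spell out ITQ3_S's ternary grid, but the round-trip property itself can be sketched with a standard threshold-based ternary quantizer as a hypothetical stand-in: because the orthonormal FWHT is self-inverse, applying it again after dequantization undoes the rotation exactly, so all remaining error comes from the ternary grid alone.

```python
import numpy as np

def fwht(x):
    """Orthonormal Fast Walsh-Hadamard Transform (self-inverse)."""
    x = x.astype(np.float64).copy()
    n, h = len(x), 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x / np.sqrt(n)

def ternary_quantize(v, thresh=0.7):
    """Illustrative ternary grid: codes in {-1, 0, +1} plus one scale per vector."""
    delta = thresh * np.mean(np.abs(v))
    q = np.sign(v) * (np.abs(v) > delta)
    alpha = np.mean(np.abs(v[q != 0])) if np.any(q) else 0.0
    return q.astype(np.int8), alpha

rng = np.random.default_rng(0)
w = rng.standard_normal(256)          # 256-point block, matching the article
r = fwht(w)                           # offline: rotate into the smoothed domain
q, alpha = ternary_quantize(r)        # offline: store 3-level codes plus a scale
w_hat = fwht(alpha * q)               # online: inverse FWHT restores the original domain

assert np.allclose(fwht(fwht(w)), w)  # the rotation itself round-trips with zero error
err = np.linalg.norm(w - w_hat) / np.linalg.norm(w)
print(f"relative reconstruction error from the ternary grid: {err:.3f}")
```

The two assertions in the sketch separate the claims: the transform is lossless, and whatever error remains is attributable entirely to the 3-level grid.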


Section 04

Hardware Co-optimization Design

Interleaved Memory Layout

An interleaved memory layout is adopted to maximize hardware utilization, enabling high parallelization of DP4A instructions and Tensor Core scheduling while reducing memory bandwidth bottlenecks. On the RTX 5090, it maintains perplexity comparable to FP16 while achieving over 1.5x the throughput of 4-bit solutions.
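The exact ITQ3_S layout is not published, but the idea behind an interleaved layout can be sketched as a permutation: instead of storing each lane's codes contiguously, element k of every lane is stored adjacently, so the simultaneous loads of a warp's 32 threads touch one contiguous region per step (coalesced access). The lane count and sizes below are illustrative assumptions.

```python
import numpy as np

LANES = 32  # one warp's worth of threads (illustrative)

def interleave(codes, lanes=LANES):
    """Reorder so memory holds [lane0[0], lane1[0], ..., lane31[0], lane0[1], ...].
    Each lane then reads its stream with stride `lanes`, so a warp's simultaneous
    loads fall into contiguous, coalesced transactions."""
    per_lane = len(codes) // lanes
    return codes.reshape(lanes, per_lane).T.reshape(-1)

def deinterleave(mem, lanes=LANES):
    """Inverse permutation, recovering each lane's contiguous code stream."""
    per_lane = len(mem) // lanes
    return mem.reshape(per_lane, lanes).T.reshape(-1)

codes = np.arange(128, dtype=np.int8)     # 4 codes per lane, labeled for visibility
mem = interleave(codes)
print(mem[:6])                            # lanes 0..5 each contribute their first code
assert np.array_equal(deinterleave(mem), codes)
```

On real hardware the same permutation is applied to packed 3-bit codes rather than byte-sized labels, which is what lets DP4A and Tensor Core pipelines stream them without bank conflicts.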

Feasibility of Consumer Hardware Deployment

ITQ3_S targets consumer hardware scenarios. As a cost-effective high-end consumer GPU, the RTX 5090 can deliver near-full-precision inference quality locally, letting small and medium-sized teams run large models without expensive cloud services.


Section 05

Experimental Validation and Performance Analysis

Perplexity Comparison

ITQ3_S achieves perplexity comparable to the FP16 baseline on multiple benchmark datasets. Traditional 3-bit solutions typically cause a 10-20% increase in perplexity, but ITQ3_S effectively suppresses this degradation through rotational domain smoothing.

Throughput Improvement

The 1.5x throughput improvement comes from architecture-level optimizations: the interleaved layout reduces shared-memory bank conflicts, FWHT fusion lowers kernel-launch overhead, and efficient Tensor Core scheduling keeps the compute units fully utilized. The gain is therefore more than the bandwidth saving implied by the bit-width ratio alone.
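A quick sanity check makes that last point concrete: memory traffic alone would predict only a 4/3 speedup when moving from 4-bit to 3-bit weights, so the remainder of the reported 1.5x must come from the kernel-level optimizations above. (The numbers here are just arithmetic on the figures already stated, not new measurements.)

```python
# Speedup predicted by memory traffic alone when moving from 4-bit to 3-bit weights.
bandwidth_only = 4 / 3
measured = 1.5                       # reported throughput ratio on the RTX 5090
extra = measured / bandwidth_only    # residual gain from layout, fusion, scheduling
print(f"bandwidth-only: {bandwidth_only:.2f}x, residual kernel gain: {extra:.2f}x")
```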


Section 06

Technical Significance and Application Prospects

ITQ3_S marks a transition in quantization from empirical parameter tuning to a mathematics-driven discipline. Through theoretical analysis and hardware co-design, it shows that high-quality inference remains achievable at extremely low bit widths.

Application prospects include:

  • Edge Deployment: Running larger models on memory-constrained devices
  • Real-Time Applications: Reducing response time in latency-sensitive scenarios such as dialogue systems
  • Cost Optimization: Reducing infrastructure investment while maintaining service quality

This research sets a benchmark for subsequent quantization work, provides valuable ideas for combining mathematical transformations with hardware characteristics, and is expected to become a standard component for LLM inference optimization.