Zing Forum

HyperP: Hypersphere Optimization Framework Reshapes Large Model Scaling Laws

The Microsoft team proposes the HyperP framework, which achieves transferable learning rates across models of different scales via hypersphere parameterization. It delivers a 1.58x improvement in computational efficiency under a 6e21 FLOPs budget and ensures training stability.

Large language models · Hypersphere optimization · Model scaling · Training stability · Mixture-of-experts · Muon optimizer · Machine learning systems · Deep learning
Published 2026-03-31 01:51 · Recent activity 2026-03-31 11:51 · Estimated read 5 min

Section 01

Introduction: HyperP Framework and Hypersphere Optimization

The Microsoft team proposes the HyperP hypersphere optimization framework, which achieves transferable learning rates across models of different scales via hypersphere parameterization. It delivers a 1.58x improvement in computational efficiency under 6e21 FLOPs and ensures training stability. This framework addresses the limitations of existing scaling laws, combining hypersphere optimization with scaling law research to provide a new paradigm for large model scaling. It also introduces the SqrtGate mechanism to optimize mixture-of-experts models, which has far-reaching implications for AI infrastructure development.


Section 02

Background: Limitations of Existing Large Model Scaling Laws

Current mainstream scaling laws are built on first-order optimizers such as AdamW. While they reveal how performance varies with compute and parameter count, they cannot guarantee the stability of large-scale training against stochastic failures such as loss spikes and gradient explosions. Moreover, hyperparameters must be re-tuned for every model configuration and do not transfer across scales, making large-scale training risky and expensive.


Section 03

Methodology: HyperP Framework and Core of Hypersphere Optimization

The core contributions of the HyperP framework include:

1. Theoretical breakthrough: under the hypersphere constraint, weight decay is a first-order no-op, simplifying hyperparameter tuning.
2. Depth-μP remains necessary in hypersphere optimization to keep training dynamics consistent across models of different depths.
3. The power-law exponent of the optimal learning rate with respect to data volume remains 0.32, consistent with AdamW.

Additionally, the SqrtGate mechanism keeps the RMS of expert outputs consistent across different granularities in mixture-of-experts models, allowing larger load-balancing weights.
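The weight-decay and SqrtGate claims above can be made concrete in a few lines. The following is a minimal NumPy sketch under stated assumptions, not the paper's implementation: it uses whole-matrix Frobenius normalization as the hypersphere constraint (per-row norms are a common variant), and reads "SqrtGate" as normalizing the squared gate weights to sum to one, which is one plausible interpretation of the name.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_to_sphere(w, radius=1.0):
    # Simple hypersphere constraint (an assumption for this sketch):
    # rescale the whole matrix to a fixed Frobenius norm.
    return w * (radius / np.linalg.norm(w))

# --- Weight decay is a first-order no-op under the constraint ---
w = project_to_sphere(rng.standard_normal((8, 8)))
lr, wd = 0.1, 0.01
grad = rng.standard_normal((8, 8))

# Pure decay shrinks w radially; re-projection undoes it exactly.
decay_only = project_to_sphere((1.0 - lr * wd) * w)
exact_noop = np.allclose(decay_only, w)

# With a gradient step, the decayed and undecayed updates differ
# only at second order in the step sizes.
step_plain = project_to_sphere(w - lr * grad)
step_decay = project_to_sphere(w - lr * grad - lr * wd * w)
residual = np.linalg.norm(step_decay - step_plain)  # small

# --- SqrtGate-style rescaling for MoE outputs (assumed reading) ---
def sqrt_gate(gates):
    # Normalize so squared gate weights sum to 1: for independent
    # unit-RMS expert outputs, the combined RMS stays ~1 for any top-k.
    return gates / np.sqrt(np.sum(gates ** 2))

def combined_rms(k, dim=100_000):
    gates = np.full(k, 1.0 / k)              # uniform top-k routing weights
    experts = rng.standard_normal((k, dim))  # unit-RMS expert outputs
    out = sqrt_gate(gates) @ experts
    return np.sqrt(np.mean(out ** 2))

rms_k2, rms_k8 = combined_rms(2), combined_rms(8)  # both stay near 1
```

Without `sqrt_gate`, uniform top-k weights of 1/k would shrink the combined RMS like 1/sqrt(k), so changing the expert granularity would change activation scales; the rescaling is what makes the RMS granularity-invariant.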


Section 04

Experimental Validation: Dual Improvements in Efficiency and Stability

Experiments cover models from billions to hundreds of billions of parameters. Under a 6×10^21 FLOPs budget, HyperP improves computational efficiency by 1.58x over the Muon optimizer. Stability metrics (Z-scores, output RMS, activation outliers) remain bounded and non-increasing with scale, so stability observed in small-scale experiments transfers to large-scale training.
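As an illustration of what tracking such diagnostics looks like, here is a minimal sketch of the three metrics named above. The exact definitions (which tensor the Z-score is taken over, the outlier threshold of 6 sigma) are illustrative assumptions, not the paper's formulas.

```python
import numpy as np

def stability_metrics(acts):
    """Three common training-stability diagnostics (definitions assumed
    for illustration): Z-score of the largest activation, output RMS,
    and the fraction of extreme-outlier activations. Bounded,
    non-increasing curves of these over training indicate stability."""
    a = np.asarray(acts, dtype=np.float64).ravel()
    mu, sigma = a.mean(), a.std()
    return {
        "z_max": (np.abs(a).max() - mu) / sigma,  # extremity of largest entry
        "rms": np.sqrt(np.mean(a ** 2)),          # overall output scale
        "outlier_frac": np.mean(np.abs(a - mu) > 6.0 * sigma),
    }

rng = np.random.default_rng(0)
# Healthy activations: RMS near 1, no 6-sigma outliers.
healthy = stability_metrics(rng.standard_normal(100_000))
# One injected spike drives z_max and outlier_frac up, flagging instability.
spiky = stability_metrics(np.append(rng.standard_normal(100_000), 50.0))
```

Logging metrics like these per layer over training steps is how "bounded and non-increasing" can be checked at small scale before committing to a large run.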


Section 05

Significance: Impact on AI Infrastructure Development

HyperP reduces the risk and cost of large-scale training, simplifies hyperparameter tuning (settings tuned at small scale transfer to large scale), and provides tools for scaling mixture-of-experts models. It pushes large models toward the trillion-parameter era and forms an important component of AI infrastructure development.


Section 06

Open Source and Community Contributions

The HyperP training code has been open-sourced in the GitHub repository (https://github.com/microsoft/ArchScale), making it easier to reproduce the paper's results, contribute community improvements, and accelerate progress in hypersphere optimization.


Section 07

Conclusion: HyperP Unlocks a New Paradigm for Large Model Scaling

HyperP combines hypersphere optimization with scaling law research, delivering both theoretical and practical benefits, and provides a reliable path for large model scaling. Paper link: http://arxiv.org/abs/2603.28743v1.