HIT EMNLP 2025 Paper Open-Sourced: Bayesian Optimization-Driven LLM Activation Sparsity Acceleration Framework

The research team from Harbin Institute of Technology (Shenzhen) has open-sourced the WAS framework. By leveraging weight-aware activation sparsity and constrained Bayesian optimization scheduling, it significantly accelerates large language model (LLM) inference without retraining. This method has been accepted by EMNLP 2025.

Tags: activation sparsity, large language models, acceleration, Bayesian optimization, EMNLP 2025, Harbin Institute of Technology, training-free, inference optimization, Transformer, model compression, TPE optimization
Published 2026-04-03 19:14 · Recent activity 2026-04-03 19:20 · Estimated read 9 min

Section 01

Introduction: HIT EMNLP 2025 Open-Sources the WAS Framework, a Retraining-Free LLM Activation Sparsity Acceleration Scheme

The research team from Harbin Institute of Technology (Shenzhen) has open-sourced the WAS (Weight-Aware Activation Sparsity) framework. This method significantly accelerates large language model (LLM) inference without retraining by using weight-aware activation sparsity and constrained Bayesian optimization scheduling. The work has been accepted at EMNLP 2025. WAS combines weight-aware strategies, component-level greedy optimization, and inter-layer TPE Bayesian optimization to balance efficiency and accuracy, offering a new path for LLM inference optimization.

Section 02

Research Background: Computational Bottlenecks in Large Model Inference and Challenges of Activation Sparsity

Computational Bottlenecks in Large Model Inference

As the parameter scale of large language models grows from billions to trillions, inference cost has become a key challenge for practical deployment. Traditional compression methods like quantization and pruning often require expensive retraining, and simple activation pruning struggles to balance efficiency and accuracy.

Prospects and Challenges of Activation Sparsity

Activation sparsity leverages the characteristic that many activation values in neural networks are close to zero, skipping zero-value computations to accelerate inference. However, the core challenge is how to intelligently decide which activations to zero out, maximizing sparsity rate while not harming model performance.
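As a toy illustration of the basic idea (not the paper's method), magnitude-based sparsification simply zeroes the smallest activations:

```python
import numpy as np

def sparsify_activations(x, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of activations.

    Toy sketch of activation sparsity: values near zero contribute
    little to the output, so skipping them saves computation.
    """
    k = int(x.size * sparsity)
    if k == 0:
        return x.copy()
    # the k-th smallest magnitude becomes the pruning threshold
    thresh = np.partition(np.abs(x).ravel(), k - 1)[k - 1]
    return np.where(np.abs(x) <= thresh, 0.0, x)
```

The hard part, as the section notes, is that a fixed magnitude cutoff ignores how much each activation actually matters downstream, which is exactly the gap the weight-aware strategy targets.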

Section 03

Core Innovations of the WAS Framework: Three Strategies to Break Through Sparsity Optimization Bottlenecks

The WAS framework proposes three key innovations:

1. Weight-Aware Sparsity Strategy: Combines the magnitude of activation values with the importance of corresponding weights to ensure that the sparsified activations have minimal impact on outputs, maintaining accuracy at high sparsity rates.

2. Component-Level Greedy Optimization: Decomposes Transformer layers into components like Q/K/V projections and MLP layers, optimizes the sparsity rate of each component independently, and uses a greedy algorithm to search for near-Pareto-optimal sparsity/accuracy trade-offs.

3. Inter-Layer TPE Optimization: Uses Tree-structured Parzen Estimator (TPE) Bayesian optimization to fine-tune the distribution of inter-layer sparsity rates, efficiently exploring the high-dimensional space to find globally better configurations.
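One plausible reading of the weight-aware criterion in strategy 1 can be sketched as follows; the scoring rule here (activation magnitude times the norm of the consuming weight column) is our illustrative formulation, not necessarily the paper's exact score:

```python
import numpy as np

def weight_aware_mask(a, W, sparsity=0.5):
    """Zero the activation channels whose removal perturbs the output least.

    For y = W @ a, channel j is scored by |a[j]| * ||W[:, j]||, so a tiny
    activation feeding a large weight column can survive pruning.
    Illustrative formulation only, not the paper's exact criterion.
    """
    col_norms = np.linalg.norm(W, axis=0)    # per-input-channel weight importance
    scores = np.abs(a) * col_norms
    k = int(a.size * sparsity)
    keep = np.ones_like(a, dtype=bool)
    keep[np.argsort(scores)[:k]] = False     # drop the k lowest-impact channels
    return a * keep
```

The design point is that pure magnitude pruning would always drop the smallest activation, whereas a weight-aware score spares small activations that multiply heavy weight columns.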

Section 04

Technical Implementation: Complete Workflow from Activation Collection to TPE Optimization

The implementation of WAS is divided into three phases:

1. Activation Collection and Histogram Generation: Collects activation distributions of each layer through forward propagation and generates histogram statistics, providing a data foundation for sparsity decisions.

2. Greedy Optimization Phase: Determines the optimal sparsity rate for each component based on activation statistics, with the goal of maximizing sparsity rate while keeping perplexity increase within a threshold.

3. Inter-Layer TPE Optimization: The TPE optimizer performs fine adjustments at the layer level, considering inter-layer dependencies to further optimize sparsity configurations.
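Phase 1 above might be sketched like this; `activation_histogram` and `threshold_for_sparsity` are hypothetical helpers for illustration, not the project's actual API:

```python
import numpy as np

def activation_histogram(acts, bins=64):
    """Accumulate a magnitude histogram over collected activation tensors."""
    mags = np.abs(np.concatenate([a.ravel() for a in acts]))
    return np.histogram(mags, bins=bins)

def threshold_for_sparsity(hist, edges, sparsity):
    """Magnitude below which roughly `sparsity` of activations fall."""
    cdf = np.cumsum(hist) / hist.sum()
    idx = np.searchsorted(cdf, sparsity)
    return edges[min(idx + 1, len(edges) - 1)]
```

Storing histograms rather than raw activations keeps the calibration pass cheap: a target sparsity rate maps to a magnitude threshold via the empirical CDF.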

In addition, the project includes custom Triton kernels to implement sparse matrix operations, ensuring that theoretical acceleration translates into actual inference speed improvements.
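The greedy phase (phase 2) can be sketched as a per-component search; `eval_ppl` is a hypothetical stand-in for a real perplexity evaluation on calibration data:

```python
def greedy_sparsity_search(components, eval_ppl, ppl_budget,
                           candidates=(0.9, 0.75, 0.5, 0.25, 0.0)):
    """Greedily push each component's sparsity as high as the perplexity
    budget allows. `eval_ppl(config)` is a hypothetical stand-in for a
    real perplexity evaluation; sparsity candidates are tried highest-first.
    """
    config = {c: 0.0 for c in components}
    base = eval_ppl(config)                      # dense-model baseline
    for c in components:
        for s in candidates:
            trial = dict(config, **{c: s})
            if eval_ppl(trial) - base <= ppl_budget:
                config[c] = s                    # accept highest feasible rate
                break
    return config
```

The TPE phase would then take such a per-component configuration as a starting point and refine the per-layer rates under the same perplexity constraint.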

Section 05

Experimental Validation: Achieving Significant Inference Acceleration While Maintaining Performance

The research team validated the effectiveness of WAS on Llama and Mistral series models:

  • Performance Preservation: On the WikiText-2 benchmark, the perplexity of the sparse model is close to that of the dense model; most of the original capabilities are retained in downstream tasks (question answering, reasoning, code generation).

  • Advantage of No Retraining: Users can convert a pre-trained model to a sparse version in a few minutes without expensive GPU fine-tuning, making it suitable for resource-constrained scenarios and rapid deployment.

Section 06

Open-Source Ecosystem: Complete Implementation and Convenient User Experience

The WAS project provides a complete open-source implementation, including core modules, custom kernels, evaluation tools, and ready-to-use scripts. The project has a clear structure and comprehensive documentation; users can complete the entire workflow from activation collection to model evaluation via simple bash scripts.

The code builds on TEAL and Optuna and is released under the Apache 2.0 license, facilitating both academic reproduction and industrial application.

Section 07

Application Implications: Facilitating Efficient Deployment of LLMs in Multiple Scenarios

WAS provides new ideas for efficient LLM inference and is of significant value in the following scenarios:

  • Edge Device Deployment: Reduces latency and energy consumption on mobile devices and edge servers.

  • High-Throughput Services: Enables cloud services to serve more users with the same hardware.

  • Real-Time Applications: Latency-sensitive applications like dialogue systems benefit from faster inference speeds.

Section 08

Limitations and Future Directions: Exploration of Attention Optimization and Hardware Collaboration

WAS has limitations: it currently focuses mainly on activation sparsity in feed-forward networks, leaving room for improvement in attention-mechanism optimization, and co-optimization of sparse patterns with specific hardware architectures remains to be explored.

Future directions include dynamic sparsity strategies (input-adaptive adjustment of sparsity rates), joint optimization with quantization methods, and dedicated sparsity schemes for long-context processing.