Zing Forum

Reading

vLLM Ascend Plugin: Natively Run Large Model Inference on Huawei Ascend NPUs

vllm-ascend is the Huawei Ascend NPU hardware plugin officially supported by the vLLM community. Through a hardware-pluggable architecture, it enables efficient large-model inference on domestic AI chips and supports multiple model types, including MoE, embedding, and multimodal models.

Tags: vLLM · Huawei Ascend NPU · Large Model Inference · Hardware Plugin · Domestic Chips · Ascend · AI Infrastructure
Published 2026-03-31 09:13 · Recent activity 2026-03-31 09:22 · Estimated read 5 min

Section 01

【Main Floor/Introduction】vLLM Ascend Plugin: Native Support for Large Model Inference on Ascend NPUs

vllm-ascend is the Huawei Ascend NPU hardware plugin officially supported by the vLLM community. Through a hardware-pluggable architecture, it enables efficient large-model inference on domestic AI chips and supports multiple model types, including MoE, embedding, and multimodal models. It fills the Ascend platform's gap in the vLLM ecosystem and provides important support for building the domestic AI chip software ecosystem.


Section 02

Background: Pain Points and Needs of the Domestic AI Chip Ecosystem

vLLM, a high-performance open-source inference framework, is known for its PagedAttention technology and continuous batching mechanism, but it has long been built around the NVIDIA CUDA ecosystem. Huawei Ascend NPUs offer strong compute and energy efficiency, yet their adoption has been limited by the maturity of the surrounding software ecosystem. The vllm-ascend project was created to close this gap.


Section 03

Methodology: Design Philosophy of the Hardware Pluggable Architecture

vllm-ascend is developed against vLLM's hardware-pluggable architecture specification, whose core idea is to decouple hardware-specific components (operators, memory management, and so on) from vLLM's core logic. This design brings maintenance independence (plugin updates do not modify core code), version compatibility, functional parity with the CUDA backend, and easier community collaboration.


Section 04

Evidence: Hardware/Model Support and Core Optimizations of the Plugin

- Hardware support: Atlas 800I A2/A3 inference servers, Atlas A2/A3 training servers, Atlas 300I Duo (experimental).
- Model types: Transformer-based models (LLaMA, Qwen, etc.), MoE models (DeepSeek-MoE, etc.), embedding models, multimodal models.
- Core optimizations: PagedAttention adaptation (dynamic KV Cache management), continuous batching (reduced tail latency), expert parallelism (deployment of ultra-large-scale MoE models).
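PagedAttention's core idea, managing the KV cache in fixed-size blocks addressed through a per-sequence block table (much like virtual-memory paging), can be shown with a small toy sketch. All names and the block size below are illustrative; this is not vllm-ascend's actual implementation.

```python
# Toy illustration of PagedAttention-style KV cache paging: each sequence
# maps logical token positions to fixed-size physical blocks via a block
# table, so cache memory is allocated on demand and reclaimed per sequence.
BLOCK_SIZE = 4  # tokens per KV cache block (illustrative)


class PagedKVCache:
    def __init__(self, num_blocks: int):
        self.free_blocks = list(range(num_blocks))      # physical block pool
        self.block_tables: dict[int, list[int]] = {}    # seq_id -> block ids

    def append_token(self, seq_id: int, pos: int) -> tuple[int, int]:
        """Map logical position `pos` of sequence `seq_id` to a
        (physical_block, offset) slot, allocating a block on demand."""
        table = self.block_tables.setdefault(seq_id, [])
        block_idx, offset = divmod(pos, BLOCK_SIZE)
        if block_idx == len(table):          # first token of a new block
            table.append(self.free_blocks.pop())
        return table[block_idx], offset

    def free_sequence(self, seq_id: int) -> None:
        """Return all of a finished sequence's blocks to the pool."""
        self.free_blocks.extend(self.block_tables.pop(seq_id, []))


cache = PagedKVCache(num_blocks=8)
slots = [cache.append_token(seq_id=0, pos=p) for p in range(6)]
# Six tokens with BLOCK_SIZE=4 occupy exactly two physical blocks.
```

Because blocks are allocated only as tokens arrive and freed as soon as a sequence finishes, memory fragmentation stays low, which is what lets continuous batching pack many requests into the same cache.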


Section 05

Evidence: Easy Deployment and Open Community Governance

- Deployment and usage: installable via pip install vllm-ascend, with usage consistent with standard vLLM (code examples cover loading a model to generate text and starting an OpenAI-compatible API server).
- Community governance: dual-branch strategy (the main branch tracks the latest features, releases branches receive stable maintenance), weekly online community meetings every Wednesday, and user case showcases (e.g., integration with tools like LLaMA-Factory).
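Since vllm-ascend keeps the standard vLLM interface, a server started on an Ascend machine exposes the usual OpenAI-compatible endpoints. The sketch below builds such a chat-completion request using only the standard library; the model name and port are placeholders, and actually sending the request requires a running server.

```python
# Build an OpenAI-compatible chat completion request for a vLLM server.
# The URL and model name are placeholders: a live server (for example one
# launched with vLLM's OpenAI-compatible serving mode on an Ascend NPU)
# is needed before the commented-out send step would succeed.
import json
import urllib.request


def build_chat_request(base_url: str, model: str, prompt: str):
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_chat_request(
    "http://localhost:8000",               # placeholder server address
    "Qwen/Qwen2.5-7B-Instruct",            # placeholder model name
    "Introduce vllm-ascend in one sentence.",
)
# resp = urllib.request.urlopen(req)       # only with a live server
```

Because the endpoint follows the OpenAI wire format, existing clients and tools (including the integrations mentioned above, such as LLaMA-Factory) can point at an Ascend-backed server without code changes.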


Section 06

Conclusion and Outlook: Promoting the Maturity of Domestic AI Infrastructure

vllm-ascend not only achieves technical adaptation but also serves as a model for how domestic AI chips can integrate into the global software stack. Continued iteration will let enterprises and developers deploy large models on the Ascend platform with a lower barrier to entry, promoting the maturity and adoption of domestic AI infrastructure.