Zing Forum

Ada-MK: MegaKernel Optimization Scheme for LLM Inference on NVIDIA Ada Architecture

The Alimama team proposed the Ada-MK framework, which achieves a 23.6% increase in single-batch throughput on NVIDIA L20 through MLIR offline DAG search and shared memory optimization, marking the first successful application of MegaKernel technology in a commercial online advertising system.

LLM Inference Optimization · MegaKernel · NVIDIA Ada · TensorRT-LLM · Online Advertising · GPU Optimization
Published 2026-05-12 14:04 · Recent activity 2026-05-13 10:24 · Estimated read 6 min

Section 01

Introduction: Ada-MK — Optimization Scheme for LLM Inference on NVIDIA Ada Architecture

The Alimama team proposed the Ada-MK framework to optimize LLM inference performance on NVIDIA Ada-architecture GPUs. Through MLIR offline DAG search and shared memory optimization, the scheme achieves a 23.6% increase in single-batch throughput on the NVIDIA L20 and marks the first successful application of MegaKernel technology in a commercial online advertising system, meeting the strict latency requirements of LLM inference in advertising scenarios.


Section 02

Background: LLM Inference Latency Challenges in Online Advertising Systems

When LLMs are deployed for real-time inference in commercial online advertising systems, end-to-end latency must be kept within a strict millisecond-level budget. In the decoding phase, generating each token triggers thousands of kernel launches, whose launch overhead accounts for 14.6% of end-to-end inference time. Even a slight increase in bidding latency can cost ad display opportunities and revenue, which makes the problem especially acute in advertising scenarios.


Section 03

MegaKernel Technology and Core Insights of Ada-MK

MegaKernel eliminates launch overhead and HBM round trips between operators by fusing multiple operators into a single persistent kernel. Existing solutions, however, face a trade-off: manual tuning lacks portability, while automatic compilation introduces branch latency through runtime dynamic scheduling. The core insight of Ada-MK is that, under a fixed deployment configuration, the optimal execution path of a MegaKernel is uniquely determined, so runtime decisions can be moved to compile time; the scheme is designed specifically for the NVIDIA Ada architecture.
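
To make the fusion idea concrete, the following minimal CUDA sketch (our illustration, not code from the paper) shows the pattern a persistent MegaKernel builds on: two stand-in operators are chained inside one grid-stride kernel, so the intermediate value stays in registers and there is neither a second kernel launch nor an HBM round trip between the operators.

#include <cuda_runtime.h>
#include <cstdio>

// Stand-in operators; in a real MegaKernel these would be GEMM, norm, activation, etc.
__device__ float op_scale(float x)  { return 2.0f * x; }
__device__ float op_square(float x) { return x * x; }

// One launch replaces two: both operators run back to back per element,
// and the intermediate result never leaves registers.
__global__ void fused_persistent_kernel(const float* in, float* out, int n) {
    // Grid-stride loop: the resident blocks sweep the whole tensor,
    // the basic building block of a persistent kernel.
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x) {
        float t = op_scale(in[i]);   // operator A, result kept in a register
        out[i] = op_square(t);       // operator B consumes it directly
    }
}

int main() {
    const int n = 1 << 20;
    float *in = nullptr, *out = nullptr;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.0f;

    fused_persistent_kernel<<<128, 256>>>(in, out, n);   // a single launch for both operators
    cudaDeviceSynchronize();
    printf("out[0] = %.1f\n", out[0]);                   // expected 4.0 = (2 * 1)^2

    cudaFree(in);
    cudaFree(out);
    return 0;
}

Launching op_scale and op_square as two separate kernels would instead write the intermediate tensor to HBM and pay a second launch, which is exactly the per-token overhead quantified in Section 02.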


Section 04

Three Core Technologies of Ada-MK

  1. 3D Shared Memory Constraint Model: models shared memory usage patterns and combines them with a K-dimension partitioning strategy, reducing peak shared memory usage by 50% to break through the capacity constraint (a toy sketch of the idea follows this list);
  2. MLIR Offline DAG Search: represents the computation graph as a fine-grained DAG and completes the optimal-path search at compile time, eliminating runtime branches;
  3. Heterogeneous Hybrid Inference Engine: embeds TensorRT-LLM, using TRT-LLM for high throughput in the Prefill phase and Ada-MK's low-latency MegaKernel in the Decode phase, balancing throughput and latency.
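
As a rough illustration of the first technique only, the toy CUDA sketch below is our own construction (matvec_k_chunked, K, and K_CHUNK are assumed names and sizes, not values from the paper). It stages the K dimension of a matrix-vector product into shared memory one chunk at a time, so peak shared memory usage is bounded by the chunk size rather than by the full K; halving the chunk roughly halves the footprint, which is the spirit of the 50% reduction attributed to the 3D constraint model.

#include <cuda_runtime.h>
#include <cstdio>

constexpr int K       = 4096;   // full reduction (K) dimension
constexpr int K_CHUNK = 1024;   // shared-memory tile size: the knob that caps peak usage
constexpr int THREADS = 256;    // threads per block

// y[row] = sum_k W[row * K + k] * x[k]; one thread block per output row.
__global__ void matvec_k_chunked(const float* W, const float* x, float* y, int rows) {
    __shared__ float x_tile[K_CHUNK];   // only K_CHUNK floats resident at a time, not K
    __shared__ float partial[THREADS];  // per-thread partial sums for the block reduction
    int row = blockIdx.x;
    if (row >= rows) return;

    float acc = 0.0f;
    for (int k0 = 0; k0 < K; k0 += K_CHUNK) {
        // Cooperatively stage one K-chunk of x into shared memory.
        for (int k = threadIdx.x; k < K_CHUNK; k += blockDim.x)
            x_tile[k] = x[k0 + k];
        __syncthreads();

        // Each thread accumulates a strided slice of this chunk.
        for (int k = threadIdx.x; k < K_CHUNK; k += blockDim.x)
            acc += W[row * K + k0 + k] * x_tile[k];
        __syncthreads();
    }

    // Standard shared-memory tree reduction across the block.
    partial[threadIdx.x] = acc;
    __syncthreads();
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s) partial[threadIdx.x] += partial[threadIdx.x + s];
        __syncthreads();
    }
    if (threadIdx.x == 0) y[row] = partial[0];
}

int main() {
    const int rows = 64;
    float *W, *x, *y;
    cudaMallocManaged(&W, rows * K * sizeof(float));
    cudaMallocManaged(&x, K * sizeof(float));
    cudaMallocManaged(&y, rows * sizeof(float));
    for (int i = 0; i < rows * K; ++i) W[i] = 1.0f;
    for (int k = 0; k < K; ++k) x[k] = 1.0f;

    matvec_k_chunked<<<rows, THREADS>>>(W, x, y, rows);
    cudaDeviceSynchronize();
    printf("y[0] = %.1f (expected %d)\n", y[0], K);   // all-ones inputs: 4096.0

    cudaFree(W); cudaFree(x); cudaFree(y);
    return 0;
}

The trade-off is more staging iterations per output, which is the kind of cost a shared memory constraint model must weigh against the capacity it frees up.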

Section 05

Experimental Results: Performance and Real-Load Validation

Evaluations on the NVIDIA L20 GPU show a 23.6% throughput increase over TensorRT-LLM and 50.2% over vLLM. On the latency side, first-token latency stays low, decoding latency is significantly reduced, and tail latency is stable. The results were further validated under real advertising-system loads, covering short-sequence, medium-sequence, and high-concurrency scenarios.


Section 06

Industrial Deployment Value and Significance

Ada-MK is the first successful application of MegaKernel technology in a commercial online advertising system, proving its production feasibility. On cost-effectiveness, it improves hardware efficiency, meets latency SLAs, and reduces energy consumption. On scalability, it supports mainstream LLMs, is optimized for Ada yet extensible to other architectures, and integrates with TensorRT-LLM for straightforward operation and maintenance.


Section 07

Limitations and Future Directions

Current limitations: the scheme mainly targets the NVIDIA Ada architecture, so other architectures require additional adaptation; model coverage can be expanded further; and it assumes fixed deployment configurations, leaving dynamic scenarios to be explored. Future directions: adapting to more architectures, optimizing for specific models, exploring dynamic adjustment strategies, and extending to multi-GPU scenarios.


Section 08

Conclusion and Paper Link

Ada-MK has made important progress in latency-sensitive commercial scenarios, balancing performance improvement and deployment feasibility, and providing practical experience for the industry. Paper link: http://arxiv.org/abs/2605.11581v1