Zing Forum


SLICE: An SLO-Driven LLM Inference Scheduling Framework for Edge Computing

An LLM inference scheduling solution specifically designed for edge computing scenarios, supporting differentiated Service Level Objective (SLO) requirements and optimizing resource allocation for latency-sensitive and throughput-prioritized tasks.

Tags: edge computing, LLM inference, scheduling framework, SLO, service quality, resource optimization, real-time inference
Published 2026-04-10 12:10 · Recent activity 2026-04-10 12:17 · Estimated read 6 min

Section 01

Introduction: SLICE—An SLO-Driven LLM Inference Scheduling Framework for Edge Computing

SLICE is an LLM inference scheduling framework designed specifically for edge computing scenarios. Its core goal is to meet the differentiated Service Level Objective (SLO) requirements of latency-sensitive tasks (e.g., real-time dialogue) and throughput-prioritized tasks (e.g., batch document processing) in resource-constrained edge environments. The framework places SLOs at the center of scheduling decisions and improves resource utilization and service quality through strategies such as dynamic resource allocation and edge-scenario adaptation.


Section 02

Background: Core Challenges of LLM Inference in Edge Computing

With the deployment of LLMs on edge devices, inference scheduling faces three major challenges:

  1. Resource constraints in edge environments;
  2. The need to serve two types of requests simultaneously: latency-sensitive requests (requiring low-latency responses) and throughput-prioritized requests (pursuing high throughput);
  3. Traditional one-size-fits-all scheduling strategies that struggle to meet these differentiated needs.


Section 03

Core Design: SLO-Driven Differentiated Scheduling Strategy

Differentiated SLO Support

Allows setting multi-dimensional SLO metrics for different requests: a latency SLO (e.g., p99 latency ≤ 500 ms), a throughput SLO (e.g., ≥ 100 requests processed per second), and a resource SLO (e.g., VRAM usage ≤ 8 GB).
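A multi-dimensional SLO like this can be captured in a small spec object. The following is a minimal sketch, not SLICE's actual interface; the `SLOSpec` fields and the `violated` check are illustrative names chosen here.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SLOSpec:
    """Hypothetical multi-dimensional SLO for one request class."""
    latency_p99_ms: Optional[float] = None   # e.g. p99 latency <= 500 ms
    throughput_rps: Optional[float] = None   # e.g. >= 100 requests/s
    vram_gb: Optional[float] = None          # e.g. VRAM usage <= 8 GB

    def violated(self, p99_ms: float, rps: float, vram: float) -> bool:
        """Return True if any configured dimension is out of bounds."""
        if self.latency_p99_ms is not None and p99_ms > self.latency_p99_ms:
            return True
        if self.throughput_rps is not None and rps < self.throughput_rps:
            return True
        if self.vram_gb is not None and vram > self.vram_gb:
            return True
        return False

# A latency-sensitive class sets only the latency bound; a batch class
# sets only throughput and resource bounds.
interactive = SLOSpec(latency_p99_ms=500)
batch = SLOSpec(throughput_rps=100, vram_gb=8)
```

Leaving a dimension unset means the scheduler simply ignores it for that request class, which matches the idea of differentiated rather than uniform SLOs.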

Dynamic Resource Allocation

Adjusts resource allocation through priority queues (graded by SLO urgency), a preemption mechanism (high-priority tasks preempt resources held by low-priority ones), and batch-processing optimization (to improve GPU utilization).
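The queue-plus-preemption idea can be sketched with a standard binary heap, assuming numeric urgency classes (0 for latency-sensitive, 1 for throughput-prioritized). `SLOQueue` and its methods are hypothetical names, not SLICE's API.

```python
import heapq
import itertools

_counter = itertools.count()  # tie-breaker: FIFO within an urgency class

class SLOQueue:
    """Sketch of an SLO-graded priority queue with simple preemption.

    Lower slo_class = more urgent (0: latency-sensitive, 1: throughput).
    """
    def __init__(self):
        self._heap = []

    def submit(self, request_id: str, slo_class: int) -> None:
        heapq.heappush(self._heap, (slo_class, next(_counter), request_id))

    def next_request(self):
        """Pop the most urgent waiting request, or None if the queue is empty."""
        if not self._heap:
            return None
        _, _, request_id = heapq.heappop(self._heap)
        return request_id

    def should_preempt(self, running_class: int) -> bool:
        """A waiting request preempts only if it is strictly more urgent."""
        return bool(self._heap) and self._heap[0][0] < running_class
```

For example, if a batch job (class 1) is running and a real-time dialogue request (class 0) arrives, `should_preempt(1)` returns True and the scheduler can suspend the batch job in favor of the dialogue turn.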


Section 04

Technical Architecture: Four Core Components Supporting Scheduling Decisions

The SLICE framework comprises four key components:

  1. SLO Parser: Converts user SLOs into internal constraints, supporting expressions like absolute thresholds and percentages;
  2. Resource Monitor: Monitors GPU VRAM, compute unit utilization, request queue length, and historical latency distribution in real time;
  3. Scheduling Decision Engine: Based on status and SLO constraints, determines request execution order, batch size, resource allocation, and model optimization strategies (e.g., quantization, KV cache compression);
  4. Feedback Controller: Adjusts strategies in a closed loop based on execution results and raises alerts when there is a risk of SLO violation.
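The original gives no code for these components. As one illustration, the SLO Parser's job of turning an absolute-threshold expression into an internal constraint might look like the sketch below; the expression grammar and the `parse_slo` function are assumptions made here, not SLICE's documented format.

```python
import re

# Assumed grammar: "<metric> <op> <value><unit>", e.g. "p99_latency <= 500ms"
_PATTERN = re.compile(r"(\w+)\s*(<=|>=)\s*(\d+(?:\.\d+)?)\s*(ms|s|rps|GB|%)?")

def parse_slo(expr: str) -> dict:
    """Parse one SLO expression into a (metric, op, value, unit) constraint."""
    m = _PATTERN.fullmatch(expr.strip())
    if m is None:
        raise ValueError(f"unrecognized SLO expression: {expr!r}")
    metric, op, value, unit = m.groups()
    return {"metric": metric, "op": op, "value": float(value), "unit": unit}
```

The Scheduling Decision Engine would then consume a list of such constraints, and the Feedback Controller would compare them against the Resource Monitor's measurements each control cycle.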

Section 05

Edge Adaptation and Application Scenarios

Edge Scenario Adaptation

  • Heterogeneous hardware support: Adapts to devices like NVIDIA Jetson and ARM architectures via an abstraction layer;
  • Power-aware scheduling: Balances performance and power consumption;
  • Network fluctuation adaptation: Supports local caching and offline inference to handle network interruptions.
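Power-aware scheduling could, for instance, cap the batch size by a power budget. This is a toy sketch with an assumed linear per-request power model; real edge devices expose power telemetry through vendor tools (e.g., tegrastats on Jetson), which is not modeled here.

```python
def pick_batch_size(power_budget_w: float, per_request_w: float,
                    max_batch: int = 32) -> int:
    """Largest batch that fits the power budget, under a linear power model.

    power_budget_w: watts the device may spend on inference right now
    per_request_w:  assumed marginal watts per request in a batch
    """
    batch = min(max_batch, int(power_budget_w // per_request_w))
    return max(1, batch)  # always make progress on at least one request
```

Under this model, shrinking the power budget (e.g., on battery) directly shrinks the batch, trading throughput for power, which is the balance the bullet above describes.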

Application Scenarios

Applicable to scenarios such as smart retail (real-time consultation + sales report generation), industrial quality inspection (real-time defect detection + batch data analysis), and intelligent transportation (real-time event recognition + traffic statistics).


Section 06

Comparative Advantages and Practical Significance

Comparison with Traditional Solutions

| Feature | Traditional Solutions | SLICE |
| --- | --- | --- |
| SLO Awareness | Limited or None | Core Design |
| Differentiated Service | Simple Priority | Multi-dimensional SLO |
| Edge Adaptation | Requires Modification | Native Support |
| Dynamic Adjustment | Static Configuration | Real-time Feedback Control |

Practical Significance

SLICE provides scheduling infrastructure for edge AI deployment. Combined with techniques such as KV cache compression and model quantization, it can further improve efficiency, offering a reference for deploying production-grade LLM services in resource-constrained environments.