Zing Forum


TENT: Declarative Data Flow Engine for Decoupled LLM Services

Modern GPU clusters use heterogeneous interconnection networks, where traditional static path selection leads to head-of-line blocking and wasted bandwidth. TENT decouples transmission intent from physical execution, unifies heterogeneous interconnects into a dynamic resource pool, schedules on real-time link quality via fine-grained slicing and dynamic spraying, and self-heals from faults within 50ms. On H800 clusters it achieves a 1.36x throughput increase and a 26% latency reduction.

Decoupled architecture · Data transmission · Heterogeneous networks · RDMA · NVLink · Slice spraying · Telemetry-driven · Fault self-healing
Published 2026-04-01 09:29 · Recent activity 2026-04-02 09:54 · Estimated read 6 min

Section 01

TENT: Declarative Data Flow Engine for Decoupled LLM Services (Introduction)

Modern GPU clusters adopt heterogeneous interconnection networks, where traditional static path selection leads to head-of-line blocking and bandwidth waste. TENT decouples transmission intent from physical execution, unifies heterogeneous interconnections into a dynamic resource pool, and achieves fault self-healing within 50ms by combining fine-grained slice spraying and telemetry-driven scheduling. On H800 clusters, TENT achieves a 1.36x throughput increase and 26% latency reduction compared to existing solutions, providing a high-performance data transmission solution for decoupled LLM services.


Section 02

Network Challenges of Decoupled LLM Services and Limitations of Existing Solutions

The decoupled LLM architecture distributes model components across multiple nodes, requiring frequent transfers of data such as activations and KV caches. However, the heterogeneous networks (RDMA, NVLink, etc.) in modern GPU clusters make link orchestration difficult. Existing solutions rely on static path binding and therefore suffer from state-blind striping, communication silos, head-of-line blocking, and operational fragility; they cannot adapt dynamically to changing network conditions.
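The static-binding failure mode described above can be sketched in a few lines (all names here are hypothetical, not from TENT or any real stack): every flow is pinned at setup time to one link, queue state is never consulted, so hot flows pile onto a single link and block each other while other links sit idle.

```python
# Static path binding: the flow -> link mapping is fixed at setup time.
STATIC_ROUTE = {"kv_cache": "rdma0", "activations": "rdma0"}

def send_static(flow: str, queues: dict) -> str:
    link = STATIC_ROUTE[flow]   # state-blind: queue depth is never consulted
    queues[link].append(flow)   # both flows queue on rdma0 -> head-of-line blocking
    return link                 # while nvlink0 carries nothing
```

Running both flows through this router leaves one link with the full backlog and the other empty, which is exactly the blocking-plus-waste pattern the section describes.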


Section 03

Core Design Philosophy of TENT: Decoupling Intent from Execution

TENT borrows from SDN (Software-Defined Networking) to separate transmission intent (the application layer declares what to transmit) from physical execution (the engine decides how to transmit it). It abstracts heterogeneous interconnects into a unified resource pool and tracks the performance metrics of each channel in real time, achieving global resource visibility, dynamic load balancing, and seamless failover.
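As a rough illustration of the unified-resource-pool idea, here is a minimal sketch assuming a simple bandwidth/latency/queue scoring rule. Every class, field, and method name below is invented for this sketch and is not TENT's actual interface.

```python
from dataclasses import dataclass

@dataclass
class Channel:
    """Live view of one physical link; fields are illustrative telemetry."""
    name: str
    bandwidth_gbps: float   # measured, not nominal
    latency_us: float
    queue_depth: int
    healthy: bool = True

class ResourcePool:
    """Unified pool over heterogeneous interconnects (RDMA, NVLink, ...)."""
    def __init__(self):
        self.channels = {}

    def update(self, ch: Channel):
        # A telemetry feed would call this to refresh per-channel state.
        self.channels[ch.name] = ch

    def best(self) -> Channel:
        live = [c for c in self.channels.values() if c.healthy]
        # Favor high bandwidth, low latency, and short queues.
        return max(live, key=lambda c:
                   c.bandwidth_gbps / (1 + c.latency_us) / (1 + c.queue_depth))
```

The point of the abstraction is that callers ask the pool for capacity rather than naming a link; marking one channel unhealthy transparently shifts selection to the next-best channel, which is what enables the seamless failover mentioned above.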


Section 04

Slice Spraying and Telemetry-Driven Orchestration

The core innovation of TENT is "slice spraying": it decomposes elephant flows into fine-grained slices and dynamically assigns them to the best available links based on real-time telemetry (bandwidth, latency, queue depth, etc.). Telemetry-driven orchestration supports congestion prediction, adaptive spraying, fault detection, and performance attribution, eliminating head-of-line blocking at its root.
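A minimal sketch of proportional slice assignment, assuming each link carries a scalar telemetry score (higher is better) and a fixed 256 KB slice size. The function name, score semantics, and slice size are assumptions for illustration, not details taken from the paper.

```python
import math

def spray(payload_bytes: int, link_scores: dict, slice_bytes: int = 256 * 1024) -> dict:
    """Split an elephant flow into fixed-size slices and assign slice counts
    to links in proportion to their telemetry score (higher = better)."""
    n_slices = math.ceil(payload_bytes / slice_bytes)
    total = sum(link_scores.values())
    # Each link gets the floor of its proportional share of slices.
    plan = {name: int(n_slices * s / total) for name, s in link_scores.items()}
    # Rounding leftovers go to the currently best-scoring links.
    leftover = n_slices - sum(plan.values())
    for name in sorted(link_scores, key=link_scores.get, reverse=True)[:leftover]:
        plan[name] += 1
    return plan
```

Because scores come from live telemetry, the same payload is split differently from one call to the next as queue depths and link quality change, which is what makes the spraying adaptive rather than static striping.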


Section 05

Sub-50ms Transparent Fault Self-Healing

TENT self-heals quickly: once telemetry flags an abnormal link, it completes path recalculation, slice rerouting, and state synchronization within 50ms, transparently to the application layer. This simplifies application development and improves system reliability.
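The rerouting step alone might look like the following toy sketch, assuming the spray plan is a per-link slice count as in a proportional-assignment model. This is an illustration only; TENT's actual recovery also recomputes paths and resynchronizes transfer state.

```python
def heal(plan: dict, link_scores: dict, failed: str) -> dict:
    """Move the failed link's remaining slices onto surviving links,
    proportionally to their telemetry scores (illustrative names, not TENT's API)."""
    stranded = plan.pop(failed, 0)
    survivors = {k: v for k, v in link_scores.items() if k != failed and k in plan}
    total = sum(survivors.values())
    moved = 0
    for name, score in survivors.items():
        share = int(stranded * score / total)  # proportional redistribution
        plan[name] += share
        moved += share
    # Any rounding remainder rides on the strongest surviving link.
    if survivors and moved < stranded:
        plan[max(survivors, key=survivors.get)] += stranded - moved
    return plan
```

Because the unit of recovery is a slice rather than a whole flow, only the stranded slices move; everything already in flight on healthy links is untouched, which is what keeps the repair fast and invisible to the application.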


Section 06

Production Deployment and Performance Evaluation

TENT has been deployed in industrial LLM inference and RL (Reinforcement Learning) pipelines. In H800 cluster tests: in LLM inference scenarios, throughput increased by 1.36x and P90 TTFT (Time To First Token) decreased by 26%; in RL pipelines, parameter update speed increased by 20-26%. Compared to solutions like Mooncake TE and NIXL, TENT has significant advantages in high-load heterogeneous environments.


Section 07

Technical Insights and Future Directions

The technical insights of TENT include the value of declarative interfaces, the power of fine-grained scheduling, the necessity of real-time telemetry, and the engineering significance of self-healing capabilities. Future directions include exploring topology-aware optimization, application-layer collaboration interfaces, multi-tenant security isolation, and adaptation to more heterogeneous hardware.


Section 08

Application Prospects and Conclusion

TENT can be applied to scenarios such as large-scale LLM inference, distributed training, and RL infrastructure. It addresses key limitations in heterogeneous networks, achieves performance improvements and transparent self-healing, and provides a reference architecture pattern for AI infrastructure.