Zing Forum


FerrisRes: A Next-Generation LLM Inference Engine Built Entirely with Rust, Ditching Python Dependencies

FerrisRes is an LLM inference and training engine written entirely in Rust. It uses the innovative Block AttnRes architecture to achieve linear time complexity, supports cross-platform GPU acceleration, and completely eliminates Python dependencies.

Tags: Rust · Large Language Model Inference Engine · Block AttnRes · Transformer · wgpu · Cross-Platform GPU Acceleration · Quantization · KV Cache
Published 2026-04-12 17:38 · Recent activity 2026-04-12 17:52 · Estimated read 6 min

Section 01

FerrisRes: A Next-Generation LLM Inference Engine Built Entirely with Rust (Introduction)

FerrisRes is an LLM inference and training engine written entirely in Rust. Its core innovation is the Block AttnRes architecture (linear time complexity), supporting cross-platform GPU acceleration (compatible with Vulkan/Metal/DX12/WebGPU via wgpu), and completely ditching Python dependencies. It aims to solve problems in the Python ecosystem such as GIL limitations, dynamic type risks, and complex dependency chains, making it suitable for edge devices, cross-platform deployment, and resource-constrained environments.


Section 02

Background: Pain Points of the Python-Dominated LLM Ecosystem

Currently, the LLM ecosystem depends almost entirely on Python, which brings three major issues:

  1. Python's GIL limits parallel computing;
  2. The dynamic type system increases the risk of runtime errors;
  3. Complex dependency chains (C extensions, CUDA kernels, etc.) make deployment and distribution difficult, especially on edge devices or across operating systems.

These pain points motivated the Rust-native FerrisRes engine.


Section 03

Core Technology: Analysis of the Block AttnRes Architecture

Traditional Transformer self-attention has O(n²) complexity, creating severe performance bottlenecks when processing long sequences. Block AttnRes achieves O(n) linear complexity through a two-layer attention structure:

  1. Intra-block Attention: divides the sequence into fixed-size blocks (default 8 tokens) and runs multi-head self-attention + RoPE within each block to produce block representations;
  2. Inter-block Attention: runs attention over the block representations, with complexity O(n/block_size).

This design balances local detail and global context.
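The two-layer structure above can be illustrated with a back-of-the-envelope cost model that counts pairwise attention scores. The function names are illustrative, not FerrisRes's actual code; the inter-block term is modeled here as full attention over the n/b block representations, which is one simple reading of the design.

```rust
// Rough cost model for Block AttnRes vs. full self-attention.
// All names are illustrative stand-ins, not FerrisRes's actual code.

/// Pairwise score count for full self-attention: n².
fn full_attention_ops(n: usize) -> usize {
    n * n
}

/// Block AttnRes: intra-block attention costs (n / b) blocks × b² scores
/// = n · b (linear in n); inter-block attention, modeled here as full
/// attention over the n / b block representations, adds (n / b)² scores.
fn block_attnres_ops(n: usize, b: usize) -> usize {
    let blocks = n / b;
    n * b + blocks * blocks
}

fn main() {
    let b = 8; // default block size from the article
    for n in [64usize, 1024, 8192] {
        println!(
            "n = {:5}: full = {:9}, block = {:8}",
            n,
            full_attention_ops(n),
            block_attnres_ops(n, b)
        );
    }
}
```

Even under this conservative model, the gap widens quickly: at n = 8192 with b = 8, the blocked scheme needs far fewer score computations than full attention.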

Section 04

Cross-Platform Support and Complete Toolchain

Cross-Platform GPU: integrates the wgpu library, supporting desktop GPUs (Vulkan/DX12), Apple Silicon (Metal), integrated graphics, and web browsers (WebGPU), enabling "write once, run anywhere".

Complete Toolchain:

  • Inference: TokenGenerator (supports generate/stream/RAG/tool calls), Logit processor chain (repetition penalty, temperature adjustment, etc.), context extension (YaRN/StreamingLLM);
  • Training: Automatic differentiation engine, GPU-side SGD/Adam optimizers, LoRA adapters, gradient checkpointing and CPU offloading (trainable on 8GB iGPU).
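A logit processor chain of the kind listed above composes independent transformations applied to the logits before sampling. The following is a minimal self-contained sketch; the trait and struct names are assumptions for illustration, not FerrisRes's real API.

```rust
// Minimal logit processor chain: repetition penalty, then temperature.
// Names below are hypothetical, not FerrisRes's actual types.

trait LogitProcessor {
    fn process(&self, logits: &mut [f32], generated: &[usize]);
}

/// CTRL-style repetition penalty: divide positive logits of already-generated
/// tokens by `penalty`, multiply negative ones, making repeats less likely.
struct RepetitionPenalty {
    penalty: f32,
}

impl LogitProcessor for RepetitionPenalty {
    fn process(&self, logits: &mut [f32], generated: &[usize]) {
        for &tok in generated {
            let l = &mut logits[tok];
            *l = if *l > 0.0 { *l / self.penalty } else { *l * self.penalty };
        }
    }
}

/// Temperature scaling: divide every logit by the temperature.
struct Temperature {
    temperature: f32,
}

impl LogitProcessor for Temperature {
    fn process(&self, logits: &mut [f32], _generated: &[usize]) {
        for l in logits.iter_mut() {
            *l /= self.temperature;
        }
    }
}

fn main() {
    // Processors run in order over the same logit buffer.
    let chain: Vec<Box<dyn LogitProcessor>> = vec![
        Box::new(RepetitionPenalty { penalty: 1.2 }),
        Box::new(Temperature { temperature: 0.7 }),
    ];
    let mut logits = vec![2.0_f32, -1.0, 0.5, 3.0];
    let generated = vec![0, 3]; // token ids already emitted
    for p in &chain {
        p.process(&mut logits, &generated);
    }
    println!("{logits:?}");
}
```

Chaining through a trait object keeps each processor independent, so new steps (top-k filtering, presence penalty, etc.) can be appended without touching existing ones.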

Section 05

Memory Optimization and Compute Shaders

Memory Optimization:

  • TurboQuant: 2-bit quantization compression, reducing KV cache memory by 16x;
  • HullKVCache: convex hull attention with O(log n) lookup complexity;
  • ToMe: CPU binary soft matching to reduce visual tokens;
  • Gradient checkpointing + offloading: supports training on resource-constrained devices.

Compute Shaders: 13 WGSL shaders (e.g., tiled MatMul, RMSNorm, Softmax, RoPE, FlashDecode), optimized for different hardware.
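The 16x figure for 2-bit KV-cache compression follows from packing sixteen 2-bit codes into the space of one f32. The sketch below shows a generic absmax-scaled 2-bit block quantizer to make that arithmetic concrete; it is an illustrative scheme, not TurboQuant's actual algorithm.

```rust
// Illustrative 2-bit block quantization: each f32 maps to one of 4 levels,
// and 16 codes pack into a single u32, so storage shrinks ~16x versus f32.
// Generic absmax scheme for illustration only, not TurboQuant itself.

/// Quantize a slice of f32s to packed 2-bit codes plus a per-block scale.
fn quantize_2bit(values: &[f32]) -> (Vec<u32>, f32) {
    let absmax = values.iter().fold(0.0_f32, |m, v| m.max(v.abs()));
    let scale = if absmax == 0.0 { 1.0 } else { absmax };
    let mut packed = vec![0u32; (values.len() + 15) / 16];
    for (i, &v) in values.iter().enumerate() {
        // Map [-scale, scale] onto the 4 codes {0, 1, 2, 3}.
        let code = (((v / scale) + 1.0) / 2.0 * 3.0).round() as u32 & 0b11;
        packed[i / 16] |= code << ((i % 16) * 2);
    }
    (packed, scale)
}

/// Reconstruct approximate f32s from the packed codes.
fn dequantize_2bit(packed: &[u32], scale: f32, len: usize) -> Vec<f32> {
    (0..len)
        .map(|i| {
            let code = (packed[i / 16] >> ((i % 16) * 2)) & 0b11;
            (code as f32 / 3.0 * 2.0 - 1.0) * scale
        })
        .collect()
}

fn main() {
    let kv = vec![0.9_f32, -0.1, 0.4, -0.8];
    let (packed, scale) = quantize_2bit(&kv);
    println!("{} floats -> {} u32 words", kv.len(), packed.len());
    println!("restored: {:?}", dequantize_2bit(&packed, scale, kv.len()));
}
```

With only four representable levels per value, reconstruction error is substantial, which is why production schemes pair aggressive bit widths with per-block scales and careful level placement.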

Section 06

Usage Examples and Development Progress

Concise API: provides a Rust API supporting basic generation, streaming generation, RAG, and other scenarios (code examples omitted).

Development Status:

  • Completed: wgpu basics, Block AttnRes, training/inference functions, TurboQuant, LoRA, RAG, etc.;
  • Mostly completed: Vision (implicit GEMM, ToMe);
  • Planned: distributed multi-GPU training, tensor parallelism, weight loading (safetensors/GGUF).

License: dual-licensed (AGPL-3.0 and commercial).
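Since the article omits its code examples, here is a self-contained mock of what a generate/stream API in this style could look like. Every name below (the trait, the echo model) is a hypothetical stand-in for illustration, not FerrisRes's actual API.

```rust
// Mock of a generate/stream text-generation interface. All names are
// hypothetical stand-ins; this is not FerrisRes's real API.

trait Generate {
    /// Produce the full completion at once.
    fn generate(&self, prompt: &str, max_tokens: usize) -> String;
    /// Produce tokens one at a time, driving a callback per token.
    fn stream(&self, prompt: &str, max_tokens: usize, on_token: &mut dyn FnMut(&str));
}

/// Toy "model" that just echoes the prompt's words, for demonstration.
struct EchoModel;

impl Generate for EchoModel {
    fn generate(&self, prompt: &str, max_tokens: usize) -> String {
        prompt
            .split_whitespace()
            .take(max_tokens)
            .collect::<Vec<_>>()
            .join(" ")
    }
    fn stream(&self, prompt: &str, max_tokens: usize, on_token: &mut dyn FnMut(&str)) {
        for tok in prompt.split_whitespace().take(max_tokens) {
            on_token(tok);
        }
    }
}

fn main() {
    let model = EchoModel;
    // Blocking generation returns the whole string.
    println!("{}", model.generate("hello from a rust inference engine", 4));
    // Streaming delivers tokens incrementally via the callback.
    model.stream("hello from a rust inference engine", 4, &mut |t| print!("{t} "));
    println!();
}
```

The callback-based stream avoids allocating the full output and maps naturally onto token-by-token decoding loops.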

Section 07

Conclusion: A New Direction for LLM Infrastructure

FerrisRes represents a diversifying direction for LLM infrastructure. It does not aim to replace PyTorch/TensorFlow, but offers a better choice for edge deployment, cross-platform applications, and resource-constrained environments. It is valuable to developers (Rust's safety and performance), researchers (an innovative architecture), and enterprises (lower operations and maintenance costs). Although still at an early stage, it could become a new standard for LLM deployment.