Zing Forum


dotLLM: A .NET Native Large Language Model Inference Engine Built from Scratch

dotLLM is an LLM inference engine built entirely from scratch using C# and the .NET tech stack, without relying on llama.cpp or Python libraries. It supports multiple Transformer architectures, provides CPU SIMD optimization and CUDA GPU acceleration, and implements advanced features such as PagedAttention, speculative decoding, and constrained decoding.

Tags: .NET · LLM Inference · C# · CUDA · Quantized Inference · Speculative Decoding · Constrained Decoding · OpenAI API · GGUF · SIMD Optimization
Published 2026-04-16 17:12 · Recent activity 2026-04-16 17:19 · Estimated read: 6 min

Section 01

dotLLM: Core Guide to the .NET Native LLM Inference Engine

Led by .NET MVP Konrad Kokosa, dotLLM shows what .NET can do in high-performance computing: an LLM inference engine written entirely in C#, with no dependency on llama.cpp or Python libraries, supporting multiple Transformer architectures, CPU SIMD optimization, CUDA GPU acceleration, and advanced features such as PagedAttention, speculative decoding, and constrained decoding.


Section 02

Project Background and Motivation

Most open-source solutions in the AI inference field are built on C/C++ or Python ecosystems. dotLLM instead builds a production-grade inference engine from scratch on a pure .NET stack. Its core design philosophy is "Native .NET": all model loading, tokenization, sampling, and computation logic is implemented in pure C#, and GPU acceleration is achieved by loading PTX kernels directly via the CUDA Driver API rather than through external native libraries. This gives .NET developers room for deep customization and integration.
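As a hedged sketch of what "loading PTX kernels via the CUDA Driver API" can look like from C#: the entry points below are real driver functions exposed by `nvcuda`, but the PTX file name and kernel name are placeholders, error handling is omitted, and this is not dotLLM's actual code.

```csharp
using System;
using System.Runtime.InteropServices;

// P/Invoke directly into the CUDA driver -- no native wrapper library.
// Requires an NVIDIA driver to be present at runtime.
static class CudaDriver
{
    [DllImport("nvcuda")] public static extern int cuInit(uint flags);
    [DllImport("nvcuda")] public static extern int cuDeviceGet(out int device, int ordinal);
    [DllImport("nvcuda", EntryPoint = "cuCtxCreate_v2")]
    public static extern int cuCtxCreate(out IntPtr ctx, uint flags, int device);
    [DllImport("nvcuda")] public static extern int cuModuleLoad(out IntPtr module, string ptxPath);
    [DllImport("nvcuda")] public static extern int cuModuleGetFunction(out IntPtr func, IntPtr module, string name);
}

class Demo
{
    static void Main()
    {
        CudaDriver.cuInit(0);
        CudaDriver.cuDeviceGet(out int dev, 0);
        CudaDriver.cuCtxCreate(out IntPtr ctx, 0, dev);
        CudaDriver.cuModuleLoad(out IntPtr mod, "kernels.ptx");          // precompiled PTX (placeholder path)
        CudaDriver.cuModuleGetFunction(out IntPtr fn, mod, "matmul_q4"); // hypothetical kernel name
        // ...set up kernel arguments and dispatch with cuLaunchKernel(fn, ...)
    }
}
```

The appeal of this approach is that the managed runtime talks to the GPU with nothing between it and the driver, so the whole stack stays distributable as ordinary .NET assemblies.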


Section 03

Technical Architecture and Performance Optimization Highlights

Layered Architecture

The project adopts a clear layered architecture, with each component shipped as an independent NuGet package: DotLLM.Core (core abstractions), DotLLM.Models (multi-architecture model loading), DotLLM.Tokenizers (support for multiple tokenizers), DotLLM.Cpu/Cuda (CPU/GPU backends), DotLLM.Engine (inference engine), and DotLLM.Server (OpenAI-compatible API).
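In practice that layering means a consuming app can reference only the layers it needs. The package ids below come from the list above, but the versions are hypothetical placeholders:

```xml
<!-- Illustrative csproj fragment: engine plus the CUDA backend only -->
<ItemGroup>
  <PackageReference Include="DotLLM.Engine" Version="0.1.0-preview" />
  <PackageReference Include="DotLLM.Cuda" Version="0.1.0-preview" />
</ItemGroup>
```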

Performance Optimization

  • Zero-GC inference: Unmanaged memory allocation, no managed heap allocation in hot paths
  • SIMD vectorization: Uses TensorPrimitives and Intrinsics to implement vectorized computations like quantized matrix multiplication
  • Memory-mapped loading of GGUF files: Load multi-GB models in milliseconds
  • Full support for GGUF quantization formats such as FP16, Q8_0, Q4_K_M
  • CUDA backend: Loads PTX kernels via the Driver API; uses cuBLAS HGEMM for the prefill phase
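The SIMD vectorization bullet boils down to hot loops like the following minimal sketch, shown here with the in-box `System.Numerics.Vector<T>` API; dotLLM itself may use TensorPrimitives or raw AVX intrinsics instead, so treat this as an illustration of the technique, not the project's code.

```csharp
using System;
using System.Numerics;

static class SimdDot
{
    // Vectorized dot product over float spans -- the kind of primitive a
    // (de)quantized matrix multiplication kernel is built from.
    public static float Dot(ReadOnlySpan<float> a, ReadOnlySpan<float> b)
    {
        if (a.Length != b.Length) throw new ArgumentException("length mismatch");
        var acc = Vector<float>.Zero;
        int lanes = Vector<float>.Count;
        int i = 0;
        // Process `lanes` floats per iteration using one SIMD register.
        for (; i <= a.Length - lanes; i += lanes)
            acc += new Vector<float>(a.Slice(i)) * new Vector<float>(b.Slice(i));
        float sum = Vector.Sum(acc);
        // Scalar tail for lengths not divisible by the lane count.
        for (; i < a.Length; i++) sum += a[i] * b[i];
        return sum;
    }
}
```

Because `Vector<T>` adapts to the widest SIMD register the CPU offers (SSE, AVX2, AVX-512), the same C# compiles to efficient code across hardware without per-ISA branches.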

Section 04

Implementation of Advanced Features

  • Paged KV Cache: Block-based memory management, including shared block pool, block table, reference counting, and copy-on-write
  • Speculative Decoding: Draft-verify-accept loop, supports greedy mode fast path and KV cache rollback
  • Constrained Decoding: JSON mode (FSM ensures syntax), JSON Schema validation, regular expressions (DFA masking), GBNF grammar constraints
  • OpenAI-compatible API: Provides interfaces like /v1/chat/completions, supports streaming SSE responses, tool calls, and web chat interface
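The Paged KV Cache bullet can be made concrete with a small sketch of a block pool that tracks reference counts and applies copy-on-write. All names and the layout here are hypothetical, assumed for illustration rather than taken from dotLLM:

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch of a paged KV-cache block pool: fixed-size blocks,
// reference counting for sharing (e.g. a common prompt prefix), and
// copy-on-write when a shared block needs to be modified.
sealed class BlockPool
{
    private readonly Stack<int> _free = new();
    private readonly int[] _refCount;

    public BlockPool(int numBlocks)
    {
        _refCount = new int[numBlocks];
        for (int i = numBlocks - 1; i >= 0; i--) _free.Push(i);
    }

    public int FreeBlocks => _free.Count;

    public int Allocate()
    {
        int id = _free.Pop();
        _refCount[id] = 1;
        return id;
    }

    // Sharing a block between sequences just bumps its refcount.
    public int Share(int id) { _refCount[id]++; return id; }

    public void Release(int id)
    {
        if (--_refCount[id] == 0) _free.Push(id);
    }

    // Copy-on-write: a uniquely owned block may be written in place;
    // a shared block is detached to a fresh block first (the caller
    // then copies the old block's contents into the new one).
    public int PrepareForWrite(int id)
    {
        if (_refCount[id] == 1) return id;
        _refCount[id]--;
        return Allocate();
    }
}
```

A real engine adds a per-sequence block table mapping logical positions to physical block ids, which is what lets attention kernels gather keys and values from non-contiguous memory.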

Section 05

Usage Methods and Development Roadmap

Usage Methods

  1. Global .NET tool: Install via dotnet tool, supports model pulling, running, and service startup
  2. Standalone binary: Download self-contained version from GitHub Releases
  3. Library reference: Reference specific NuGet packages in projects for deep integration
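As a sketch, the global-tool workflow might look like the following. The tool id, verbs, and flags are hypothetical placeholders, so check the project README for the real commands:

```shell
# Hypothetical command names -- consult the dotLLM documentation for the real ones.
dotnet tool install -g dotllm   # 1. install as a global .NET tool
dotllm pull <model-id>          # pull a model (id elided on purpose)
dotllm serve --port 8080        # start the OpenAI-compatible server
```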

Development Roadmap

Stages 1-6 (end-to-end generation, practical inference, CPU performance, GPU acceleration, constrained decoding and API, improved services) have been completed. Stage 7 (diagnostics and interpretability: logprobs, hook system, logit lens, etc.) is in progress.


Section 06

Practical Significance and Impact

  • Proves that the .NET ecosystem can support high-performance computing for AI inference
  • For .NET developers: Seamless integration with existing applications, easy customization with pure C# code, flexible Native AOT deployment, MIT license friendly to enterprises
  • For the AI community: Provides a reference for non-mainstream tech stack implementations, with architectural decisions and optimization techniques of reference value
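The Native AOT point maps onto standard .NET publish settings; a minimal, hypothetical project fragment:

```xml
<!-- Compile ahead-of-time to a self-contained native binary -->
<PropertyGroup>
  <PublishAot>true</PublishAot>
</PropertyGroup>
```

Publishing with `dotnet publish -c Release -r <rid>` then produces a dependency-free executable, which is how the standalone binaries mentioned earlier can be built.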

Section 07

Project Summary

dotLLM is an ambitious and well-executed project that challenges the common assumption that AI inference must be built on Python or C++. Although still in preview, it already demonstrates the core capabilities of a production-grade inference engine and marks an important milestone in the AI capability building of the .NET ecosystem.