
LLMhop: A Zero-Dependency Lightweight LLM Inference Routing Gateway

LLMhop is a minimalist, stateless HTTP router written in Go, designed specifically for OpenAI-compatible LLM inference backends. It intelligently distributes requests across multiple single-model inference servers, enabling a lightweight gateway solution with zero external dependencies and single-binary deployment.

Tags: LLM Inference, API Gateway, Go, vLLM, sglang, OpenAI-compatible, Stateless Architecture, Zero Dependencies
Published 2026-05-11 16:43 · Recent activity 2026-05-11 16:55 · Estimated read: 8 min

Section 01

LLMhop: Introduction to the Zero-Dependency Lightweight LLM Inference Routing Gateway

LLMhop is a minimalist, stateless HTTP router written in Go, designed specifically for OpenAI-compatible LLM inference backends. It intelligently distributes requests across multiple single-model inference servers, providing a lightweight gateway solution with zero external dependencies and single-binary deployment. This addresses the pain point of unified management for multi-model inference services.


Section 02

Project Background and Problem Definition

As LLM inference technology has matured, enterprises and developers increasingly opt for private deployment of open-source models using inference engines such as vLLM and sglang. However, these engines are mostly designed for single-model, single-process operation. When multiple model services must be provided simultaneously, traditional solutions rely on complex load balancers or API gateways, leading to operational complexity and resource overhead. LLMhop aims to solve this problem by providing a minimalist, stateless HTTP routing solution.


Section 03

Core Design Philosophy and Workflow

Core Design Principles

  • Minimalism: Implemented purely in Go with no third-party dependencies; compiled into a single static binary for easy deployment.
  • Stateless Architecture: No persistent state maintained; supports horizontal scaling and can run behind load balancers.
  • Model-Aware Routing: Parses the model field in OpenAI API requests and forwards to the corresponding backend.
  • Zero External Dependencies: Built on Go's standard library, reducing the attack surface and simplifying supply chain audits.

Request Handling Workflow

  1. Receive client OpenAI-compatible API requests
  2. Extract the model field from the request body
  3. Look up the backend URL in the configuration based on the model name
  4. Forward the request to the target backend
  5. Return the backend's response to the client
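
To make this workflow concrete, here is a minimal sketch of steps 1-5 using only Go's standard library, in the spirit of the project's zero-dependency design. It is an illustrative approximation rather than LLMhop's actual code: the model names, backend URLs, route path, and hard-coded 100 MiB limit are assumed example values.

    // Illustrative sketch of the request handling workflow above, using only
    // Go's standard library. Not LLMhop's actual code; model names, backend
    // URLs, route path, and the 100 MiB limit are assumed example values.
    package main

    import (
        "bytes"
        "encoding/json"
        "io"
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    // backends maps model names to backend base URLs (hypothetical examples).
    var backends = map[string]string{
        "llama-3.1-8b": "http://vllm-llama:8000",
        "qwen2.5-7b":   "http://sglang-qwen:30000",
    }

    func route(w http.ResponseWriter, r *http.Request) {
        // Steps 1-2: buffer the body (bounded) so the model field can be parsed
        // and the body can still be forwarded unchanged.
        body, err := io.ReadAll(http.MaxBytesReader(w, r.Body, 100<<20))
        if err != nil {
            http.Error(w, "request body too large or unreadable", http.StatusRequestEntityTooLarge)
            return
        }
        var req struct {
            Model string `json:"model"`
        }
        if err := json.Unmarshal(body, &req); err != nil {
            http.Error(w, "invalid JSON body", http.StatusBadRequest)
            return
        }

        // Step 3: look up the backend URL for the requested model.
        backend, ok := backends[req.Model]
        if !ok {
            http.Error(w, "unknown model: "+req.Model, http.StatusNotFound)
            return
        }
        target, err := url.Parse(backend)
        if err != nil {
            http.Error(w, "invalid backend URL", http.StatusInternalServerError)
            return
        }

        // Steps 4-5: forward the request and stream the backend response back.
        r.Body = io.NopCloser(bytes.NewReader(body))
        r.ContentLength = int64(len(body))
        httputil.NewSingleHostReverseProxy(target).ServeHTTP(w, r)
    }

    func main() {
        http.HandleFunc("/v1/", route)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }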

Security Features

  • Optional Bearer token authentication (constant-time comparison to prevent timing attacks)
  • Request body size limit (100MiB by default, adjustable)
  • Sensitive configuration values can be read from environment variables or files, avoiding hardcoded secrets
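
A minimal sketch of the first two features in standard-library Go is shown below: the token check hashes both values and compares them with crypto/subtle to keep the comparison constant-time, and http.MaxBytesReader enforces the body size limit. This is an assumed illustration, not LLMhop's implementation, and the LLMHOP_TOKEN environment variable name is hypothetical.

    // Illustrative sketch of the security features above (optional Bearer token
    // with constant-time comparison, plus a request body size limit); not
    // LLMhop's actual code. The LLMHOP_TOKEN variable name is hypothetical.
    package main

    import (
        "crypto/sha256"
        "crypto/subtle"
        "log"
        "net/http"
        "os"
        "strings"
    )

    // withAuth wraps a handler with optional Bearer token authentication and a
    // body size limit. Hashing both values first keeps the comparison
    // constant-time even when the presented token has a different length.
    func withAuth(token string, maxBodyBytes int64, next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if token != "" {
                presented := strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer ")
                a := sha256.Sum256([]byte(presented))
                b := sha256.Sum256([]byte(token))
                if subtle.ConstantTimeCompare(a[:], b[:]) != 1 {
                    http.Error(w, "unauthorized", http.StatusUnauthorized)
                    return
                }
            }
            // Reject bodies larger than the configured limit (100 MiB by default).
            r.Body = http.MaxBytesReader(w, r.Body, maxBodyBytes)
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        // Read the token from the environment rather than hardcoding it.
        token := os.Getenv("LLMHOP_TOKEN")
        routing := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.WriteHeader(http.StatusOK) // placeholder for the routing handler
        })
        log.Fatal(http.ListenAndServe(":8080", withAuth(token, 100<<20, routing)))
    }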

Section 04

Deployment Methods and Configuration Example

Deployment Methods

  1. Native Binary: llmhop --config config.json
  2. Nix Package Manager: nix run github:mirkolenz/llmhop -- --config config.json
  3. Docker Container: docker run --rm -p 8080:8080 -v ./config.json:/config.json ghcr.io/mirkolenz/llmhop --config /config.json
  4. NixOS Module: Provides out-of-the-box hardened systemd service configuration, supporting DynamicUser and sandbox protection.

Configuration Example

A complete JSON configuration includes the listening port, authentication token, request body size limit, and model-backend mappings. It supports replacing sensitive information with environment variables (${env:NAME}) or files (${file:path}).
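
For orientation, a hypothetical configuration along those lines might look as follows. The maxBodyBytes key and the ${env:...}/${file:...} substitution syntax come from the description above; the remaining key names, model names, and backend URLs are illustrative placeholders, so consult the project's README for the exact schema.

    {
        "port": 8080,
        "token": "${env:LLMHOP_TOKEN}",
        "maxBodyBytes": 104857600,
        "models": {
            "llama-3.1-8b": "http://vllm-llama:8000",
            "qwen2.5-7b": "${file:/run/secrets/qwen-backend-url}"
        }
    }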


Section 05

Compatibility and Performance Characteristics

Compatibility

Supports multiple OpenAI-compatible backends:

  • Self-hosted engines: vLLM, sglang, TabbyAPI, Aphrodite, Ollama, LocalAI
  • Hosted services: OpenRouter, together.ai, DeepInfra
  • Commercial APIs: OpenAI, Anthropic (via compatibility layer)

Performance

  • Memory Usage: Depends on maxBodyBytes and the number of concurrent requests; typically within a few hundred MB under standard configurations.
  • Latency Overhead: Limited to JSON parsing, a configuration lookup, and network forwarding; the added end-to-end latency is negligible.

Section 06

Applicable Scenarios and Limitations

Applicable Scenarios

  • Multi-model inference services: Unified management of vLLM/sglang instances for multiple models like Llama/Qwen
  • Hybrid cloud deployment: Simultaneous use of private models and external APIs (e.g., OpenAI GPT-4)
  • Development and testing environments: Quickly set up multi-model environments without complex gateways

Limitations

  • Request body buffering: The entire request body needs to be buffered in memory; adjust maxBodyBytes for heavy loads
  • No load balancing: Each model supports only a single backend URL
  • No caching mechanism: Duplicate requests are fully forwarded
  • Go version requirement: Go 1.21 or higher

Section 07

Future Directions and Summary

Future Development

  • WebSocket support: Optimize streaming response proxying
  • Dynamic configuration reloading: Update configurations without restarting
  • Metrics and monitoring: Built-in Prometheus metrics
  • Rate limiting: Protect backend resources

Summary

LLMhop is a well-designed lightweight LLM inference routing gateway that addresses the pain points of multi-model deployment through its zero-dependency, stateless architecture. It is suitable for organizations and developers needing to manage multiple single-model backends. Its NixOS integration and hardened systemd configuration provide security best practices for production environments.