Zing Forum

FreeRelay: Programmable AI Inference Control Plane and Intelligent Routing Gateway

FreeRelay is an open-source AI inference gateway that automatically selects the optimal backend between free and paid LLM providers via intelligent routing, supporting task complexity detection, circuit breaker protection, budget prediction, and multi-step execution DAGs.

Tags: AI Gateway · Smart Routing · LLM Providers · OpenAI-Compatible · Circuit Breaker · Budget Control · Multi-step DAG · Observability · Task Complexity Detection
Published 2026-04-02 05:12 · Recent activity 2026-04-02 05:24 · Estimated read: 7 min

Section 01

FreeRelay: Open-source AI Inference Gateway with Smart Routing Between Free & Paid LLM Providers

FreeRelay is an open-source AI inference gateway designed to solve key pain points in AI application development. It automatically selects the optimal backend between free and paid LLM providers via smart routing, with support for task complexity detection, circuit breaker protection, budget prediction, and multi-step execution DAGs. Its core value lies in reducing wasted spend, avoiding rate-limit issues, and improving reliability, all without requiring changes to application code.


Section 02

Background: Pain Points in AI Inference Today

With the booming LLM ecosystem, developers face several challenges: free AI services (Groq, Google AI Studio, OpenRouter, etc.) are fragmented, with varying API formats, rate limits, and reliability; HTTP 429 rate-limit errors stall pipelines; and paid credits are overconsumed on simple tasks that free services could handle. These issues motivate an intelligent gateway like FreeRelay.


Section 03

Core Architecture & Operational Modes

FreeRelay offers three operational modes: Free (free providers only, budget-sensitive), Paid (OpenAI/Anthropic only, highest quality), and Auto (the default: free providers first, switching to paid for complex tasks). Its architecture comprises client communication via an OpenAI-compatible API and an internal gateway pipeline (request validation, task complexity detection, smart routing, circuit breaking, budget prediction). Routing decisions weigh success probability, quality score, latency, cost, and security, along with tenant policies, circuit state, budget health, and UCB exploration rewards.
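The routing decision described above can be sketched as a scoring function. The field names, weights, and UCB constant below are illustrative assumptions, not FreeRelay's actual implementation:

```python
import math

# Hypothetical per-provider stats; the field names are illustrative,
# not FreeRelay's real internal schema.
providers = {
    "groq":   {"success_p": 0.92, "quality": 0.70, "latency_s": 0.4, "cost": 0.0,   "tries": 120, "circuit_open": False},
    "openai": {"success_p": 0.99, "quality": 0.95, "latency_s": 1.2, "cost": 0.005, "tries": 40,  "circuit_open": False},
}

def route(providers, total_tries, cost_weight=50.0, explore=0.5):
    """Pick the provider with the highest expected utility plus a UCB bonus."""
    best, best_score = None, float("-inf")
    for name, s in providers.items():
        if s["circuit_open"]:  # circuit breaker: skip unhealthy backends
            continue
        utility = (s["success_p"] * s["quality"]
                   - s["latency_s"] * 0.1
                   - s["cost"] * cost_weight)
        # UCB exploration bonus: rarely-tried providers get a boost
        bonus = explore * math.sqrt(math.log(total_tries) / s["tries"])
        if utility + bonus > best_score:
            best, best_score = name, utility + bonus
    return best
```

Opening a provider's circuit immediately removes it from consideration, so traffic shifts to the remaining healthy backends without any client-side change.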


Section 04

Task Complexity Detection & Smart Routing Logic

FreeRelay's core strength is fine-grained task analysis: each request is profiled in under 5 ms across 10 dimensions (task family, depth, precision, latency category, context topology, tool needs, determinism requirements, security level, output contract, economic constraints) without any LLM calls, keeping overhead low. The context optimizer ranks historical records to retain high-value content and rewrites prompts to suit each provider's characteristics. The routing engine uses an expected-utility formula to select the best provider-model combination. A policy DSL supports complex rules (prioritizing or excluding providers, limiting temperature, enabling hedging, etc.).
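A minimal, LLM-free profiler in the spirit of the dimensions above might look like this; the heuristics, patterns, and thresholds are hypothetical, not FreeRelay's real ones:

```python
import re

def profile_request(prompt: str) -> dict:
    """Heuristic, LLM-free workload profile; all thresholds are illustrative."""
    words = prompt.split()
    has_code = bool(re.search(r"```|def |class |SELECT ", prompt))
    return {
        "task_family": "code" if has_code else "chat",
        "depth": "deep" if len(words) > 300 else "shallow",
        "precision": "strict" if re.search(r"\bjson\b|\bexact\b", prompt, re.I) else "loose",
        "tool_needs": bool(re.search(r"\bsearch\b|\bbrowse\b", prompt, re.I)),
    }
```

Because everything is regex and word counts, the profile costs microseconds, which is how a sub-5 ms budget with no LLM calls is plausible.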


Section 05

Free & Paid Provider Ecosystem

Free providers: Groq (llama-3.1, mixtral-8x7b, 30 RPM, fast), Google (gemini-1.5-flash, 15 RPM, large context), OpenRouter (llama-3.1, mistral-7b, 20 RPM, rich models), Together AI (llama-3.1, qwen2, 60 RPM, batch-friendly), Mistral (mistral-small, multilingual), NVIDIA (llama-3.1, mixtral, 40 RPM, GPU-optimized). Paid providers: OpenAI (gpt-4o, gpt-4o-mini, comprehensive), Anthropic (claude-3.5-sonnet, long context). This layered design balances cost and quality without code changes.
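The tiered catalog above could be expressed as a simple registry; the structure and mode names here are an illustrative sketch, not FreeRelay's actual configuration schema:

```python
# Illustrative tiered provider registry mirroring the article's listing;
# the dict layout is hypothetical, not FreeRelay's real config format.
FREE_PROVIDERS = {
    "groq":       {"models": ["llama-3.1", "mixtral-8x7b"], "rpm": 30},
    "google":     {"models": ["gemini-1.5-flash"],          "rpm": 15},
    "openrouter": {"models": ["llama-3.1", "mistral-7b"],   "rpm": 20},
    "together":   {"models": ["llama-3.1", "qwen2"],        "rpm": 60},
    "mistral":    {"models": ["mistral-small"],             "rpm": None},
    "nvidia":     {"models": ["llama-3.1", "mixtral"],      "rpm": 40},
}
PAID_PROVIDERS = {
    "openai":    {"models": ["gpt-4o", "gpt-4o-mini"]},
    "anthropic": {"models": ["claude-3.5-sonnet"]},
}

def candidates(mode: str) -> list:
    """Return eligible provider names for an operating mode (free/paid/auto)."""
    if mode == "free":
        return list(FREE_PROVIDERS)
    if mode == "paid":
        return list(PAID_PROVIDERS)
    # auto: free tier first, paid tier as fallback for complex tasks
    return list(FREE_PROVIDERS) + list(PAID_PROVIDERS)
```

Keeping the tier split in configuration rather than application code is what lets the cost/quality trade-off change without touching callers.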


Section 06

Advanced Features: DAG Workflows & Elasticity

FreeRelay supports multi-step execution DAGs (in place of single requests) with chained components (classifiers, generators, validators, tools, etc.) and conditional transitions (e.g., on validation failure). Validation takes a layered approach (structural, semantic, async judge). Elasticity features include a circuit breaker (CLOSED/HALF_OPEN/OPEN), EWMA budget prediction, AIMD congestion control, and a semantic cache (MinHash + LSH for duplicate prompts). Observability: Prometheus metrics, OpenTelemetry tracing, and structured logs (pattern pass rate, retry categories, etc.).
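As one example of the elasticity features, EWMA budget prediction fits in a few lines; the smoothing factor and the class API below are assumptions for illustration, not FreeRelay's actual interface:

```python
class BudgetPredictor:
    """EWMA spend-rate tracker; alpha and method names are illustrative."""

    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha
        self.rate = 0.0  # smoothed cost per request (USD)

    def observe(self, cost: float) -> None:
        # Exponentially weighted moving average: recent requests dominate
        self.rate = self.alpha * cost + (1 - self.alpha) * self.rate

    def projected_spend(self, expected_requests: int) -> float:
        return self.rate * expected_requests

    def healthy(self, remaining_budget: float, expected_requests: int) -> bool:
        # Budget is "healthy" if the projected spend fits what's left
        return self.projected_spend(expected_requests) <= remaining_budget
```

A router can consult `healthy()` before dispatching to a paid provider and fall back to the free tier when the projection exceeds the remaining budget.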


Section 07

Platform Integration & Deployment Options

Integrations: Continue.dev (simple config), LangChain (OpenAI adapter), Node.js/TypeScript (OpenAI SDK), OpenWebUI (change the API base URL), OpenClaw (deep integration for cost optimization), OpenCode/Codex CLI (coding-assistant backend). Deployment: a one-command pip install (pip install -e ., then freerelay), Docker Compose (FreeRelay + Redis + Jaeger + Prometheus + Grafana for production observability), and CLI tools (status checks, benchmarking).
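Since FreeRelay exposes the OpenAI wire format, integrating a client mostly means pointing it at the gateway's base URL. A stdlib sketch of the request payload such a client would POST; the localhost URL and the "auto" model name are assumptions, not documented values:

```python
import json

# Assumed local endpoint for an OpenAI-compatible gateway deployment.
GATEWAY_URL = "http://localhost:8000/v1/chat/completions"

def build_request(prompt: str, model: str = "auto") -> bytes:
    """Serialize an OpenAI wire-format chat request for the gateway."""
    payload = {
        "model": model,  # hypothetical "auto" lets the router pick a backend
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return json.dumps(payload).encode("utf-8")
```

Any OpenAI SDK, LangChain adapter, or plain HTTP client that emits this shape works unchanged once its base URL is swapped for the gateway's.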


Section 08

v3 MAX Spec & Roadmap

FreeRelay is based on the v3 MAX inference spec (control/data plane separation, Redis schema, workload profiling, routing audit, utility math, DAG engine, etc.). Roadmap: Phase 1 (days 1-5: OpenAI wire format, provider adapters, streaming, circuit breaker, budget prediction); Phase 2 (days 6-10: profiler, utility routing, semantic cache, context pipeline, validation/repair); Phase 3 (days 11-14: DAG engine, control-plane learning, observability dashboard, Docker stack, docs/CI).