Token Scout: A Real-Time LLM Model Discovery and Routing Tool for AI Agents

Token Scout is a real-time model discovery tool designed for AI agents. It supports querying over 28 free models and provides compatibility filtering, cost control, and quota tracking features. It integrates with AI agent clients like Claude Code and OpenClaw via the MCP protocol, enabling agents to automatically select the most suitable model based on task requirements.

Tags: AI agents · LLM model discovery · MCP protocol · cost optimization · OpenRouter · Ollama · model routing · free inference · Claude Code
Published 2026-04-06 07:15 · Recent activity 2026-04-06 07:24 · Estimated read: 5 min


Section 02

Background and Problems

In AI agent development, a common pain point is rigid model selection. Many agents hardcode model IDs, which means they cannot take advantage of the ever-changing pool of free and low-cost inference resources. On OpenRouter alone there are currently over 28 free models, including Qwen3 Coder 480B, Nemotron 120B, and DeepSeek R1. These offerings change daily, however, and hardcoded model selection leaves agents unable to adapt.

Worse still, there are three major compatibility barriers between different models:

  1. Tool format fragmentation: Anthropic, OpenAI, and Ollama have different function calling methods
  2. Context window limitations: Sending 200,000 tokens to a model with a 32K context window leads to catastrophic data loss
  3. Reasoning tag conflicts: Claude returns its thinking through a separate API field, while DeepSeek R1 and Qwen3 emit inline think tags; mixing the two formats corrupts the conversation.
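The three barriers can be expressed as a single compatibility gate. The sketch below is hypothetical: the metadata fields (`tool_format`, `context_length`, `reasoning_style`) are illustrative stand-ins, not Token Scout's actual schema.

```python
# Hypothetical compatibility gate covering the three barriers above.
# Field names are assumptions for illustration, not Token Scout's real schema.

def is_compatible(model: dict, prompt_tokens: int, agent: dict) -> bool:
    """Return True only if the model is safe for this agent to call."""
    if model["tool_format"] != agent["tool_format"]:                 # barrier 1: tool fragmentation
        return False
    if prompt_tokens > model["context_length"]:                      # barrier 2: context overflow
        return False
    if model["reasoning_style"] not in agent["accepted_reasoning"]:  # barrier 3: reasoning tags
        return False
    return True

agent = {"tool_format": "openai", "accepted_reasoning": {"separate_field", "none"}}
r1 = {"tool_format": "openai", "context_length": 32768, "reasoning_style": "inline_think_tags"}

# A 200K-token prompt to a 32K-context model fails the context check:
print(is_compatible(r1, 200_000, agent))   # → False
```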

Section 03

Token Scout's Solution

Token Scout is a real-time LLM model discovery tool that addresses all three problems above. Its core design philosophy is: no proxy, no middleware, no latency tax. Token Scout only tells the agent where to call a model; the agent then calls the provider directly, without passing through any proxy layer.
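In other words, discovery and inference are separate steps: the discovery call returns only routing metadata, and the agent keeps the provider in its own data path. The sketch below assumes a hypothetical `discover()` result shape and model name; Token Scout's real MCP tool interface may differ.

```python
# Illustrative "no proxy" flow: discovery returns routing metadata only;
# the agent then calls the provider endpoint itself. The discover() result
# shape and the model name here are assumptions, not Token Scout's API.

def discover(task: str) -> dict:
    # In the real tool this would be an MCP tool call; stubbed here.
    return {
        "model": "qwen/qwen3-coder:free",
        "base_url": "https://openrouter.ai/api/v1",
        "api_key_env": "OPENROUTER_API_KEY",
    }

route = discover("write a parser")

# The agent now talks to the provider directly -- no middleware in the data path:
#   client = OpenAI(base_url=route["base_url"], api_key=os.environ[route["api_key_env"]])
#   client.chat.completions.create(model=route["model"], ...)
print(route["model"])   # → qwen/qwen3-coder:free
```

Because no proxy sits between agent and provider, Token Scout adds latency only to the (infrequent) discovery step, never to inference itself.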


Section 04

Core Features

  • Real-time model discovery: Queries OpenRouter, Groq, Cerebras, Mistral, GitHub, Google, and local Ollama instances
  • Compatibility filtering: Ensures no routing to models that would break tool calls, truncate context, or use incompatible inference formats
  • Cost control: Sets maximum cost per 1K tokens, supports free-only models, cheap models, or unrestricted mode
  • Quota tracking: Tracks requests and token consumption per provider, filters out models with exhausted quotas
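The cost-control and quota-tracking features amount to filters over the candidate pool. This is a minimal sketch; the field names (`prompt_price_per_1k`, the quota map) are assumptions for illustration.

```python
# Sketch of the cost-cap and quota filters described above.
# Field names are illustrative, not Token Scout's actual schema.

def usable(models, max_cost_per_1k=0.0, quotas=None):
    """Keep models within the cost cap whose provider still has quota."""
    quotas = quotas or {}
    out = []
    for m in models:
        if m["prompt_price_per_1k"] > max_cost_per_1k:   # cost control
            continue
        if quotas.get(m["provider"], 1) <= 0:            # quota exhausted
            continue
        out.append(m)
    return out

candidates = [
    {"id": "deepseek-r1:free", "provider": "openrouter", "prompt_price_per_1k": 0.0},
    {"id": "gpt-4o", "provider": "openai", "prompt_price_per_1k": 0.005},
]

# Free-only mode: max_cost_per_1k=0.0 keeps only the free model.
print([m["id"] for m in usable(candidates)])   # → ['deepseek-r1:free']
```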

Section 05

Technical Architecture

Token Scout uses a three-layer discovery mechanism:


Section 06

Layer 1: OpenRouter Real-Time Discovery

Queries all available models and real-time pricing via the OpenRouter API. Free models change hourly, and Token Scout captures these changes in real time.
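OpenRouter's models endpoint (`GET https://openrouter.ai/api/v1/models`) returns per-model pricing as strings, so a model is free when both its prompt and completion prices are "0". The filter below runs against a canned payload to keep the logic visible; a real client would fetch the URL instead.

```python
# Filter OpenRouter's model list down to free models. The payload shape
# mirrors GET https://openrouter.ai/api/v1/models; shown against canned
# data here so no network call is needed.

def free_models(payload: dict) -> list[str]:
    return [
        m["id"]
        for m in payload["data"]
        if float(m["pricing"]["prompt"]) == 0.0
        and float(m["pricing"]["completion"]) == 0.0
    ]

sample = {"data": [
    {"id": "deepseek/deepseek-r1:free",
     "pricing": {"prompt": "0", "completion": "0"}, "context_length": 163840},
    {"id": "anthropic/claude-sonnet-4",
     "pricing": {"prompt": "0.000003", "completion": "0.000015"}, "context_length": 200000},
]}

print(free_models(sample))   # → ['deepseek/deepseek-r1:free']
```

Polling this endpoint periodically is enough to track the hourly churn in free offerings, since pricing changes show up in the same response.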


Section 07

Layer 2: Ollama Constellation Discovery

Detects Ollama instances running in the local network and inventories loaded models. Supports multi-host configuration:

  • OLLAMA_HOST - Local Ollama (default: 127.0.0.1)
  • MARS_HOST - Additional host
  • GALAXY_HOST - GPU inference host
  • LUNAR_HOST - Lightweight inference host
  • EXPLORA_HOST - Heavy computing host (multi-GPU, nginx load balancing)
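Constellation discovery reduces to probing each configured host and listing its loaded models; Ollama exposes this via `GET /api/tags`. The sketch below assumes the env vars hold bare hostnames and that each host uses Ollama's default port 11434; a short timeout lets offline hosts be skipped quickly.

```python
import json
import os
import urllib.request

# Sketch of "constellation" discovery: probe each configured host's Ollama
# API (GET /api/tags lists loaded models) with a short timeout. Assumes
# bare hostnames and the default Ollama port 11434.

HOST_VARS = ["OLLAMA_HOST", "MARS_HOST", "GALAXY_HOST", "LUNAR_HOST", "EXPLORA_HOST"]

def model_names(tags_payload: dict) -> list[str]:
    """Extract model names from an Ollama /api/tags response."""
    return [m["name"] for m in tags_payload.get("models", [])]

def discover_constellation(timeout: float = 0.5) -> dict[str, list[str]]:
    found = {}
    for var in HOST_VARS:
        host = os.environ.get(var)
        if not host:
            continue
        try:
            with urllib.request.urlopen(f"http://{host}:11434/api/tags", timeout=timeout) as r:
                found[var] = model_names(json.load(r))
        except OSError:
            pass  # host offline or unreachable -- skip it
    return found
```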

Section 08

Layer 3: Static Fallback

When real-time discovery is unavailable, uses a curated list of known free-tier providers.
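The fallback order can be sketched as "try each live source, then fall back to the static list". The provider entries in `STATIC_FALLBACK` below are illustrative placeholders, not Token Scout's actual curated list.

```python
# Sketch of the three-layer fallback: prefer live discovery, fall back to a
# curated static list when every live source fails. STATIC_FALLBACK entries
# are illustrative, not Token Scout's real curated list.

STATIC_FALLBACK = [
    {"id": "llama-3.3-70b-versatile", "provider": "groq"},
    {"id": "mistral-small-latest", "provider": "mistral"},
]

def discover_models(live_sources):
    for source in live_sources:
        try:
            models = source()
            if models:
                return models
        except OSError:
            continue   # source unreachable -- try the next one
    return STATIC_FALLBACK   # layer 3: static fallback

def failing():
    raise OSError("network down")

print([m["provider"] for m in discover_models([failing])])   # → ['groq', 'mistral']
```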