EVOKORE-MCP: The MCP Routing Hub Unifying the AI Tool Ecosystem

EVOKORE-MCP is an MCP routing and multi-server aggregator built with TypeScript. It provides a unified endpoint for AI clients, integrates native workflow tools and proxy sub-servers, and supports namespace isolation, dynamic tool discovery, and human-machine collaborative approval.

Tags: MCP, Model Context Protocol, AI tools, agent skills, routing, EVOKORE, TypeScript, tool aggregation, RBAC, HITL
Published 2026-04-11 07:42 · Recent activity 2026-04-11 07:47 · Estimated read: 6 min

Section 01

EVOKORE-MCP: A Unified Routing Hub for AI Tool Ecosystems

EVOKORE-MCP is a TypeScript-built MCP (Model Context Protocol) routing and multi-server aggregator that provides a unified endpoint for AI clients. It integrates native workflow tools and proxy subservers, supporting namespace isolation, dynamic tool discovery, role-based access control (RBAC), and human-in-the-loop (HITL) approval. Its core mission is to solve the fragmentation of AI tool management by consolidating multiple MCP servers into a single interface.


Section 02

Background: Fragmentation in AI Tool Integration

As AI assistants expand their capabilities, tool integration has become a key bottleneck. Developers often manage multiple MCP servers (e.g., GitHub, file systems, databases, voice synthesis), each with distinct configurations, permissions, and interfaces. This fragmentation reduces efficiency and usability, creating a need for a unified solution like EVOKORE-MCP.


Section 03

Architecture & Core Features: Native Tools and Proxy Aggregation

EVOKORE-MCP v3.0.0 uses a layered architecture:

  • Client Layer: AI clients (Claude, Cursor, Gemini) connect via a single stdio endpoint.
  • Routing Layer: Includes SkillManager (native tools), ToolCatalogIndex, and ProxyManager.
  • Native Tools: 11 core tools covering skill discovery (search_skills), execution (execute_skill), workflow management (resolve_workflow), and system administration (proxy_server_status), following the "skill as code" philosophy.
  • Proxy Aggregation: Prefixes each proxied tool with its server ID (${serverId}_${tool.name}) to avoid name collisions (e.g., github_create_issue), and supports sub-servers such as GitHub, file system, ElevenLabs, and Supabase.
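The prefix-based namespacing can be sketched in a few lines. This is an illustrative version, assuming a minimal Tool shape; EVOKORE-MCP's actual internals may differ.

```typescript
// Minimal tool shape for the sketch; the real catalog likely carries more fields.
interface Tool {
  name: string;
  description: string;
}

// Prefix every proxied tool with its server ID, e.g. "github_create_issue",
// so tools from different sub-servers cannot collide.
function namespaceTools(serverId: string, tools: Tool[]): Tool[] {
  return tools.map((tool) => ({
    ...tool,
    name: `${serverId}_${tool.name}`,
  }));
}

const namespaced = namespaceTools("github", [
  { name: "create_issue", description: "Create a GitHub issue" },
]);
console.log(namespaced[0].name); // "github_create_issue"
```

Because the server ID is baked into the exposed name, the router can later split a call like github_create_issue back into server and tool without extra bookkeeping.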

Section 04

Dynamic Discovery & Security Mechanisms

Dynamic Discovery: Two modes are available:

  • Legacy Mode (default): Shows all native + proxy tools.
  • Dynamic Mode: Only native tools are visible by default; proxy tools must be activated via discover_tools, though they can still be called directly by their full prefixed name.
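The two discovery modes can be modeled with a small in-memory catalog. This is a hedged sketch: the class and method names here are illustrative, not EVOKORE-MCP's actual API.

```typescript
type Mode = "legacy" | "dynamic";

class ToolCatalog {
  private activated = new Set<string>();

  constructor(
    private mode: Mode,
    private nativeTools: string[],
    private proxyTools: string[],
  ) {}

  // Legacy mode lists everything; dynamic mode lists native + activated tools.
  listVisibleTools(): string[] {
    if (this.mode === "legacy") {
      return [...this.nativeTools, ...this.proxyTools];
    }
    return [...this.nativeTools, ...Array.from(this.activated)];
  }

  // discover_tools-style activation for dynamic mode.
  activate(name: string): void {
    if (this.proxyTools.includes(name)) {
      this.activated.add(name);
    }
  }

  // A direct call with the full prefixed name succeeds even before activation.
  canCall(name: string): boolean {
    return this.nativeTools.includes(name) || this.proxyTools.includes(name);
  }
}
```

The key design point is that visibility and callability are separate questions: dynamic mode trims what the model sees (saving context tokens), without blocking a client that already knows the full prefixed name.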

Security:

  • RBAC system with admin (full access), developer (most tools), and readonly (query-only) roles.
  • HITL approval: Sensitive operations require an _evokore_approval_token for human confirmation.
  • Rate limiting: Per-server/tool token bucket configs to prevent abuse.
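A per-tool token bucket is the standard shape for the rate limiting described above. The sketch below is an assumption about the mechanism, not EVOKORE-MCP's actual limiter code; capacity and refill values are illustrative.

```typescript
// Classic token bucket: a call consumes one token; tokens refill continuously
// up to a fixed capacity, so short bursts are allowed but sustained abuse is not.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(private capacity: number, private refillPerSec: number) {
    this.tokens = capacity;
    this.last = Date.now();
  }

  tryConsume(now: number = Date.now()): boolean {
    const elapsedSec = (now - this.last) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// e.g. allow bursts of 5 calls to a tool, refilling one token per second:
const limiter = new TokenBucket(5, 1);
if (!limiter.tryConsume()) {
  // reject the tool call with a rate-limit error
}
```

A router would typically keep one bucket per server or per prefixed tool name in a Map, which matches the "per-server/tool token bucket configs" the docs describe.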

Section 05

Session Management & Voice Integration

Session Continuity:

  • Session Manifest: Maintains state (activated tools, pending approvals, execution history).
  • Claude Memory Sync: Ensures cross-session consistency.
  • Repo Audit: npm run repo:audit checks branch status before multi-slice sessions.
  • Dashboard: Web interface at 127.0.0.1:8899 for HITL approvals and monitoring.
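A session manifest holding the state above might look like the following. The field names are hypothetical, chosen to mirror the three kinds of state the docs list (activated tools, pending approvals, execution history).

```typescript
// Illustrative manifest shape; EVOKORE-MCP's on-disk format may differ.
interface SessionManifest {
  activatedTools: string[];   // proxy tools enabled via discover_tools
  pendingApprovals: string[]; // HITL requests awaiting a human decision
  executionHistory: Array<{ tool: string; at: string; ok: boolean }>;
}

// Append an execution record without mutating the previous manifest,
// which keeps snapshots cheap to diff and sync across sessions.
function recordExecution(
  manifest: SessionManifest,
  tool: string,
  ok: boolean,
): SessionManifest {
  return {
    ...manifest,
    executionHistory: [
      ...manifest.executionHistory,
      { tool, at: new Date().toISOString(), ok },
    ],
  };
}
```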

Voice Integration: VoiceSidecar runtime via WebSocket (ws://localhost:8888) for voice synthesis, with a hook system for persona-aware workflows and hot-reloadable voice configs.
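A client of such a sidecar would send JSON messages over the WebSocket. The message schema below is purely an assumption for illustration; the actual VoiceSidecar protocol is not documented here.

```typescript
// Hypothetical synthesis request; field names are assumptions.
interface VoiceRequest {
  type: "synthesize";
  text: string;
  persona?: string; // persona-aware workflows could route on this
}

function buildVoiceRequest(text: string, persona?: string): string {
  const msg: VoiceRequest = {
    type: "synthesize",
    text,
    ...(persona ? { persona } : {}),
  };
  return JSON.stringify(msg);
}

// A client would send the serialized message to ws://localhost:8888, e.g. via
// the global WebSocket available in recent Node.js versions:
//   const ws = new WebSocket("ws://localhost:8888");
//   ws.addEventListener("open", () => ws.send(buildVoiceRequest("Hello", "narrator")));
```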


Section 06

Deployment & Technical Details

Tech Stack: TypeScript, Node.js. Deployment Steps:

  1. npm ci to install dependencies.
  2. npm run build to compile.
  3. Copy .env.example to .env and configure env vars (GitHub Token, ElevenLabs API Key).
  4. Point MCP client to compiled entry file.

Cross-platform: On Windows, npx is mapped to npx.cmd, and uv/uvx must be available in PATH. Config Sync: npm run sync:dry and npm run sync auto-generate configs for supported CLIs.
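The Windows npx mapping is a one-line check in the config generator. The helper below is a sketch of that idea, not the project's actual sync script.

```typescript
// On win32, npx is not directly spawnable by name; the .cmd shim is.
// Other commands pass through unchanged.
function resolveCommand(cmd: string, platform: string): string {
  if (platform === "win32" && cmd === "npx") {
    return "npx.cmd";
  }
  return cmd;
}

// Typical call site when emitting a client config entry:
//   const command = resolveCommand("npx", process.platform);
```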


Section 07

Conclusion: Value of EVOKORE-MCP

EVOKORE-MCP represents a mature step in the MCP ecosystem, integrating fragmented AI tools through abstraction layers, namespace isolation, dynamic discovery, and governance. It balances openness (MCP protocol compliance) with simplicity (a unified interface). For AI assistant developers it offers a robust infrastructure option, and its "aggregate rather than scatter" design will only grow in value as the AI tool ecosystem expands.