Zing Forum


Omni-MCP: A Unified Routing Server for Local Multimodal Models on Mac

Omni-MCP is a multimodal MCP server that automatically routes input content to local visual, audio, or text models on Mac M-series chips, enabling a single interface to handle all modalities.

Tags: MCP · Multimodal · Local AI · Apple Silicon · Ollama · Vision Models · Speech Recognition · Claude
Published 2026-04-04 04:06 · Recent activity 2026-04-04 04:18 · Estimated read 5 min

Section 01

Omni-MCP: Introduction to the Unified Routing Server for Local Multimodal Models

Omni-MCP is a multimodal MCP server designed for Mac M-series chips. Its core goal is to handle multiple input modalities, such as text, images, and audio, through a unified interface. It automatically routes each request to the appropriate local model (e.g., Ollama with Qwen3.5 for text, vllm-mlx with Qwen3-VL for vision, mlx-whisper with Whisper Large v3 Turbo for audio), delivering a local-first experience with privacy protection and low latency. It also integrates seamlessly with Claude Desktop, giving developers a concise and efficient multimodal AI integration solution.


Section 02

Fragmentation Challenges in Multimodal AI Integration

As large language models evolve toward multimodality, developers face fragmentation issues in calling methods, parameter formats, and inference backends for different models (text, image, audio), which greatly increases development complexity. Omni-MCP aims to solve this integration problem through a unified server and MCP protocol.


Section 03

Core Architecture: Automatic Modality Detection and Unified Interface

Omni-MCP follows the concept of "one server, one protocol, all modalities":

  • Automatic input modality detection: pure text → text model, image included → vision model, audio included → audio model (if both image and audio exist, audio takes priority);
  • Unified query(prompt, image?, audio?) interface: clients do not need to care about the underlying models; they only need to call this interface to process the corresponding modal input.
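The routing rule above is simple enough to sketch directly. This is an illustrative reconstruction, not the project's actual code; the names `detect_modality` and the string labels are assumptions made for the example. The one non-obvious detail it encodes is the stated tie-break: when both an image and audio are present, audio wins.

```python
from typing import Optional

def detect_modality(prompt: str, image: Optional[str] = None,
                    audio: Optional[str] = None) -> str:
    """Pick the target modality for a query(prompt, image?, audio?) call.

    Audio takes priority over image when both attachments are present,
    matching the routing rule described above.
    """
    if audio is not None:
        return "audio"   # audio beats image on a tie
    if image is not None:
        return "vision"
    return "text"        # pure text falls through to the text model
```

A client never selects a model itself; it calls the single `query` interface and the server applies this rule before dispatching to a backend.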

Section 04

Tech Stack and Apple Silicon Optimization

Omni-MCP is optimized specifically for Mac M-series chips, making full use of the Neural Engine:

  • Text model: Ollama running Qwen3.5;
  • Vision model: vllm-mlx running Qwen3-VL (Ollama as an alternative);
  • Audio transcription: mlx-whisper running Whisper Large v3 Turbo.

All inference runs locally, protecting privacy and reducing latency.
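The stack above amounts to a fixed modality-to-backend mapping. A minimal sketch of that mapping, assuming a simple dict shape; the structure and model-name strings here are illustrative, not Omni-MCP's real configuration schema:

```python
# Modality → local backend/model, per the stack listed above.
BACKENDS = {
    "text":   {"backend": "ollama",      "model": "qwen3.5"},
    "vision": {"backend": "vllm-mlx",    "model": "qwen3-vl"},
    "audio":  {"backend": "mlx-whisper", "model": "whisper-large-v3-turbo"},
}

def backend_for(modality: str) -> dict:
    """Resolve the backend entry for a detected modality."""
    return BACKENDS[modality]
```

Keeping this mapping in one place is what lets the unified interface stay ignorant of which runtime actually serves a given request.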

Section 05

Configuration System and Claude Desktop Integration

  • Configuration: Set via environment variables prefixed with OMNI_ (e.g., Ollama/vllm-mlx API endpoints, model names, log levels, timeouts, etc.);
  • Claude Desktop integration: Modify ~/Library/Application Support/Claude/claude_desktop_config.json, add MCP server configuration (specify the path to the uv running script), and it will automatically connect on startup to enable multimodal dialogue.
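As a concrete illustration of the integration step, a `claude_desktop_config.json` entry might look like the following. The `mcpServers` key is Claude Desktop's standard MCP configuration shape; the server name `omni-mcp`, the directory path, and the script argument are placeholder assumptions, not values from the project's docs.

```json
{
  "mcpServers": {
    "omni-mcp": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/omni-mcp", "server.py"]
    }
  }
}
```

After restarting Claude Desktop, the server is launched via uv and connects automatically, enabling multimodal dialogue.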

Section 06

Modular Structure and Extensibility

The project adopts a modular design:

  • Entry point: server.py (FastMCP framework);
  • Routing logic: router.py;
  • Configuration management: config.py;
  • Data models: schemas.py;
  • Backend adapters: ollama.py, vllm_mlx.py (abstract base class pattern).

The development toolchain includes uv, pytest, and ruff; the MIT license allows free modification and extension.
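The abstract-base-class pattern mentioned for the backend adapters can be sketched as below. The class and method names are illustrative, not copied from ollama.py or vllm_mlx.py; the trivial `EchoBackend` stands in for a real adapter purely to show the pattern.

```python
from abc import ABC, abstractmethod

class Backend(ABC):
    """Common interface every inference-backend adapter implements."""

    @abstractmethod
    def infer(self, prompt: str, **attachments) -> str:
        """Run inference and return the model's text response."""

class EchoBackend(Backend):
    """Toy adapter used here only to demonstrate subclassing."""

    def infer(self, prompt: str, **attachments) -> str:
        return f"echo: {prompt}"
```

Because the router depends only on the abstract interface, adding a new runtime (say, another MLX-based server) means writing one adapter module rather than touching the routing logic.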

Section 07

Local-First Value and Summary

Omni-MCP fills the gap in the local multimodal AI ecosystem:

  • Privacy advantage: All data is processed locally with no third-party uploads;
  • Performance advantage: Eliminates network latency for faster responses;
  • Developer value: Standardized interfaces lower integration barriers, while flexible configuration and extensibility support custom needs.

For developers building privacy-first multimodal applications on Mac, Omni-MCP is an important tool.