Zing Forum


Latent Relay: Building a Bridge Between Closed-Source and Open-Source Large Models in Latent Space

An innovative MCP server project that enables closed-source models like Claude to use the interpretable internal representations of open-source models for reasoning calibration via SAE feature extraction technology, achieving latent space collaboration across model architectures.

Tags: Latent Relay · LatentMAS · ERIS · SAE (sparse autoencoder) · latent space · multi-agent · Claude · Gemma · MCP
Published 2026-04-02 15:44 · Recent activity 2026-04-02 15:51 · Estimated read: 7 min

Section 01

Introduction: Latent Relay—A Latent Space Bridge Connecting Closed-Source and Open-Source Large Models

Latent Relay is an innovative MCP server project that aims to bridge the divide between the strong reasoning capabilities of closed-source models (e.g., Claude) and the interpretability of open-source models (e.g., Gemma). Using SAE (sparse autoencoder) feature extraction, it builds a latent-space communication channel between closed-source and open-source models, enabling deep cross-architecture collaboration rather than simple text dialogue.


Section 02

Project Background and Core Challenges

The current large model ecosystem is polarized: closed-source models (Claude, GPT-4) have strong reasoning capabilities but are internal black boxes; open-source models (Gemma, Llama) are transparent but slightly less capable. Core question: Can we combine the strong reasoning of closed-source models with the interpretability of open-source models? Based on LatentMAS research results, Latent Relay builds a REST/MCP server layer to enable model interaction in the representation space of neural network hidden layers.


Section 03

Three-Tier Progressive Technical Architecture

Tier 1: LatentMAS Base Server

Provides REST/MCP interfaces, supports loading any model, with features including hidden state extraction, implicit thought trajectory recording, SAE analysis, precise injection, and MCP compatibility, enabling internal model transparency.
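The hidden-state extraction this tier describes can be sketched with PyTorch forward hooks on a toy module; the `HiddenStateRecorder` class and the three-layer stand-in model below are illustrative assumptions, not the project's actual API.

```python
import torch
import torch.nn as nn

class HiddenStateRecorder:
    """Records every layer's output via forward hooks (illustrative sketch)."""
    def __init__(self, model: nn.Module):
        self.states = []
        self.handles = [
            layer.register_forward_hook(self._capture)
            for layer in model.children()
        ]

    def _capture(self, module, inputs, output):
        # Detach so recorded states can be inspected without holding the graph.
        self.states.append(output.detach())

    def remove(self):
        for h in self.handles:
            h.remove()

# Toy 3-layer "model" standing in for a real transformer stack.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 8))
rec = HiddenStateRecorder(model)
_ = model(torch.randn(2, 8))
print([tuple(s.shape) for s in rec.states])  # → [(2, 16), (2, 16), (2, 8)]
```

A real server would expose these recorded tensors over its REST/MCP interface rather than printing them.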

Tier 2: ERIS v5 Orchestration Engine

Coordinates the interaction between reasoning models and probe models: the OrchestratorLLM performs step-by-step reasoning; every N steps, the ProbeModel extracts activation states and the DriftDetector computes a drift score. If the score exceeds a threshold, feedback is sent back to the OrchestratorLLM for calibration; no modification of closed-source model parameters is required.
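The probe-every-N-steps loop can be sketched as plain Python with stub components; the function names (`orchestrate`, `reason_step`) and the feedback format are assumptions for illustration, not the ERIS v5 interface.

```python
def orchestrate(task, reason_step, probe, drift, threshold=0.5, n_probe=2, max_steps=8):
    """Every n_probe steps, probe the current state and, if drift exceeds
    the threshold, feed a calibration hint back to the reasoner (sketch)."""
    state, feedback = task, None
    baseline = probe(state)
    for step in range(1, max_steps + 1):
        state = reason_step(state, feedback)   # reasoner sees last feedback
        feedback = None
        if step % n_probe == 0:
            d = drift(baseline, probe(state))
            if d > threshold:
                feedback = f"drift={d:.2f}: recheck recent steps"
    return state

# Stub components: a reasoner that appends a token per step, a probe that
# maps the state to a toy activation (its length), and a scaled-difference
# drift metric.
steps_log = []
def reason_step(state, feedback):
    steps_log.append(feedback)
    return state + "."

probe = lambda s: len(s)
drift = lambda a, b: abs(a - b) / 10
result = orchestrate("task", reason_step, probe, drift,
                     threshold=0.3, n_probe=2, max_steps=6)
```

In the real system, `reason_step` would be a call to the closed-source model and `probe` a forward pass through the open-source probe model.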

Tier 3: ERIS V2 SAE Drift Detection

Upgraded to SAEProbe (Gemma3 + Gemma Scope2 SAE), where SAE features correspond to interpretable concepts (sparse activation of approximately 50 features). Drift detection uses dual metrics: Jaccard distance (concept difference) + cosine distance (numerical change).
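The dual-metric drift computation described above (Jaccard over the sets of active features, cosine over the raw activation vectors) can be written directly in NumPy; the function name `sae_drift` and the tiny 4-dimensional vectors are illustrative, since real SAE feature vectors have thousands of dimensions with only ~50 active.

```python
import numpy as np

def sae_drift(prev, curr, active_thresh=0.0):
    """Concept-level (Jaccard) and numeric (cosine) drift between two
    SAE feature vectors (illustrative sketch)."""
    # Concept difference: compare the *sets* of active feature indices.
    a = set(np.flatnonzero(prev > active_thresh))
    b = set(np.flatnonzero(curr > active_thresh))
    jaccard = 1.0 - len(a & b) / max(len(a | b), 1)
    # Numeric change: cosine distance over the raw activations.
    denom = np.linalg.norm(prev) * np.linalg.norm(curr)
    cosine = (1.0 - float(prev @ curr) / denom) if denom else 1.0
    return jaccard, cosine

prev = np.array([1.0, 0.5, 0.0, 0.0])   # features 0 and 1 active
curr = np.array([1.0, 0.0, 0.5, 0.0])   # features 0 and 2 active
j, c = sae_drift(prev, curr)
```

Here one of three active concepts is shared, so the Jaccard distance is 2/3 while the cosine distance stays small (0.2), showing why the two metrics are complementary.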


Section 04

Concept Guidance and Multi-Agent Coordination Mechanism

Concept Guidance

Direction vectors are obtained via contrastive prompts (e.g., "rigorous solving" vs "quick answer"), which can be applied in three modes: addition mode (amplify concepts), projection elimination (suppress concepts), and replacement mode (hard redirection), enabling fine-grained behavior control.
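The three application modes reduce to simple vector arithmetic on a hidden state `h` and a unit concept direction `v`; the `steer` function below is a minimal sketch of that math, not the project's API, and the fixed vectors stand in for directions that would really come from contrasting prompt activations.

```python
import numpy as np

def steer(h, v, mode="add", alpha=1.0):
    """Apply a concept direction v to hidden state h (illustrative)."""
    v = v / np.linalg.norm(v)
    if mode == "add":        # addition mode: amplify the concept
        return h + alpha * v
    if mode == "project":    # projection elimination: suppress the concept
        return h - (h @ v) * v
    if mode == "replace":    # replacement mode: hard redirect onto v
        return np.linalg.norm(h) * v
    raise ValueError(f"unknown mode: {mode}")

# Direction from contrastive prompts, e.g. mean("rigorous solving")
# minus mean("quick answer") activations, stubbed here as fixed vectors.
h = np.array([3.0, 4.0])
v = np.array([1.0, 0.0])
```

Projection elimination zeroes the component of `h` along `v` (here yielding `[0, 4]`), while replacement keeps only the norm of `h` and redirects it entirely along `v` (here `[5, 0]`).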

Multi-Agent Coordination

MultiAgentCoordinator supports three modes: isolation mode (independent operation), shared medium mode (shared drift detector), and collaboration mode (shared reasoning history), suitable for different scenario needs.
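One way the three coordination modes might dispatch is sketched below; the class shape, mode names, and `record` method are all assumptions made for illustration, since the source does not document the `MultiAgentCoordinator` interface.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    history: list = field(default_factory=list)

class MultiAgentCoordinator:
    """Dispatches on the three coordination modes (hypothetical sketch)."""
    def __init__(self, agents, mode="isolated"):
        self.agents, self.mode = agents, mode
        self.shared_history = []   # used in "collaborate" mode
        self.shared_drift = []     # used in "shared_medium" and "collaborate"

    def record(self, agent, step, drift_score):
        if self.mode == "collaborate":
            self.shared_history.append((agent.name, step))  # shared reasoning
        else:
            agent.history.append(step)                      # per-agent history
        if self.mode in ("shared_medium", "collaborate"):
            self.shared_drift.append(drift_score)           # shared detector

a, b = Agent("solver"), Agent("checker")
coord = MultiAgentCoordinator([a, b], mode="collaborate")
coord.record(a, "step-1", 0.1)
coord.record(b, "step-2", 0.2)
```

In isolation mode each agent would keep its own history and drift state; collaboration mode pools both.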


Section 05

Hardware Requirements and Deployment Practices

Hardware requirements for each component:

  • Base server tier: 12GB VRAM (Qwen3.5-4B) / 24GB (Qwen3-14B)
  • ERIS v5 orchestration tier: Can run on pure CPU (API calls only)
  • ERIS v5 local probe: 24GB recommended, 40GB preferred
  • ERIS V2 SAE probe: Gemma3 9B requires A100 80GB; 27B requires H100 80GB

Users without high-end GPUs can use cloud solutions: run orchestration logic on CPU and delegate probe inference to remote services.
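Delegating probe inference to a remote service could look like the client sketch below; the endpoint URL, JSON schema, and `probe_remote` function are hypothetical, and the injectable `transport` keeps the example runnable offline.

```python
import json
from urllib import request

def probe_remote(hidden_state, url, transport=None):
    """POST a hidden-state payload to a remote probe service and return
    its drift score (hypothetical endpoint and schema)."""
    payload = json.dumps({"hidden_state": hidden_state}).encode()
    if transport is None:
        # Real HTTP POST to the remote GPU box.
        req = request.Request(url, data=payload,
                              headers={"Content-Type": "application/json"})
        with request.urlopen(req) as resp:
            body = resp.read()
    else:
        # Injected transport for offline testing.
        body = transport(url, payload)
    return json.loads(body)["drift"]

# Offline usage: a stub transport stands in for the remote probe service.
stub = lambda url, payload: json.dumps({"drift": 0.12}).encode()
score = probe_remote([0.1, 0.2], "http://probe.example/api/probe", stub)
```

The orchestration loop then runs entirely on CPU, paying only network latency per probe call.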


Section 06

Rigorous Gate-Keeping Test Validation Process

The project uses gate-keeping tests to ensure reliability:

  • Gate 0: Verify SAE mathematical validity, average number of activated features: 5-500
  • Gate 1: Drift prediction of reasoning errors, Spearman correlation coefficient ≥0.35
  • Gate 2 (to be implemented): Probe detection accuracy, AUC ≥0.60
  • Gate 3 (to be implemented): Intervention effect, accuracy improvement ≥5 percentage points
  • Gate 4 (to be implemented): Model scale effect, AUC improvement ≥5 percentage points between 27B and 9B models
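Gate 1's criterion (drift must predict reasoning errors with Spearman rho ≥ 0.35) can be checked with a few lines of NumPy; the `gate1_pass` helper is an illustrative sketch of the check, not the project's test harness.

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation on ranks (ties averaged)."""
    def ranks(a):
        order = np.argsort(a)
        r = np.empty(len(a))
        r[order] = np.arange(1, len(a) + 1)
        for v in np.unique(a):           # average ranks over tied values
            mask = a == v
            r[mask] = r[mask].mean()
        return r
    rx = ranks(np.asarray(x, dtype=float))
    ry = ranks(np.asarray(y, dtype=float))
    rx -= rx.mean(); ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

def gate1_pass(drift_scores, error_rates, threshold=0.35):
    """Gate 1: drift must predict reasoning errors with rho >= threshold."""
    return spearman(drift_scores, error_rates) >= threshold
```

Monotonically aligned drift and error data passes the gate (rho = 1.0); anti-aligned data fails (rho = -1.0).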

Section 07

Application Scenarios and Future Outlook

Application Scenarios

  • Reasoning process visualization: "see" the concepts the model is thinking about via SAE features
  • Error warning and correction: Drift detection provides early warnings and correction suggestions
  • Model capability enhancement: Concept guidance boosts specific capabilities (e.g., mathematical reasoning)
  • Cross-model knowledge transfer: Latent space communication breaks model silos

Future Outlook

The project is currently in Phase 2 (the SAE drift-detection pipeline is active); the next step is to run the AIME problem-validation scripts and start the gate-keeping test sequence, which is expected to open a new paradigm for large-model applications.