Zing Forum

LLM-MDT: Browser-side Multi-Model Collaboration System, Enabling Large Language Models to Form Multidisciplinary Teams

A purely front-end Multidisciplinary Team (MDT) application that lets multiple large language models answer user questions collaboratively, reduces model bias through an anonymous review mechanism, and supports a complete three-stage reasoning process.

Tags: LLM, multi-agent, multidisciplinary team, Vue.js, frontend-only, reasoning, Claude, OpenRouter, collaborative AI, medical AI
Published 2026-05-16 18:13 · Recent activity 2026-05-16 18:18 · Estimated read: 7 min

Section 01

[Introduction] LLM-MDT: Browser-side Multi-Model Collaboration System, Enabling Large Language Models to Form Multidisciplinary Teams

LLM-MDT is a purely front-end Multidisciplinary Team (MDT) application. Its core idea is to let multiple large language models form a collaborative team that answers questions by simulating the human expert consultation process. It reduces model bias through an anonymous review mechanism and supports a complete three-stage reasoning process. Because the entire application runs in the browser, users' API keys and data are stored only locally, ensuring privacy and security. The architecture also brings easy deployment, zero operations cost, and fully open-source code.


Section 02

Background and Motivation: Limitations of Single LLM and Inspiration from MDT Model

Although a single large language model is powerful, it hits limits on complex problems: different models have different areas of strength and different knowledge blind spots. Drawing inspiration from the Multidisciplinary Team (MDT) model in human medicine to make multiple AI models work collaboratively is the core motivation behind the LLM-MDT project.


Section 03

Project Overview: Four Advantages of Pure Front-end Architecture

LLM-MDT adopts a pure front-end architecture that runs entirely in the browser. All computation and storage happen locally on the user's device, which brings four advantages:

  1. Privacy Protection: API keys and conversation data are only stored locally and not uploaded to servers;
  2. Easy Deployment: No complex back-end required, can be directly deployed to static hosting platforms;
  3. Zero Operation and Maintenance Costs: No server maintenance or downtime risks;
  4. Fully Open Source: Built with Vue 3 + TypeScript + Tailwind CSS, with fully transparent code.

Section 04

Core Working Mechanism: Three-Stage Collaborative Reasoning Process

LLM-MDT simulates real MDT consultations and is divided into three stages:

Independent Diagnosis

All configured "Council Models" answer the question independently and in parallel; parallel processing preserves a diversity of viewpoints.

Peer Review

Every model anonymously scores the first-stage answers without knowing which model wrote each one, eliminating "model name bias" and keeping the reviews objective.

Comprehensive Decision-Making

The "Chair Model" integrates the original answers and review results to generate the final comprehensive answer.
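The anonymization step in stage two and the score aggregation that feeds the Chair Model can be sketched in TypeScript. This is an illustrative sketch, not the project's actual internals; all type and function names here are assumptions.

```typescript
// Stage-1 output: each council model's independent answer.
interface CouncilAnswer {
  model: string; // which council model produced the answer
  text: string;  // the answer itself
}

// Stage-2 output: one anonymous score per (reviewer, answer) pair.
interface Review {
  reviewer: string; // model doing the scoring
  answerId: number; // index into the anonymized answer list
  score: number;    // e.g. 1-10
}

// Before peer review, strip model names so reviewers cannot tell which
// model wrote which answer -- this is what removes "model name bias".
function anonymize(answers: CouncilAnswer[]): { id: number; text: string }[] {
  return answers.map((a, id) => ({ id, text: a.text }));
}

// After review, attach the average anonymous score back to each original
// answer; the Chair Model can then weigh answers by consensus quality.
function aggregateScores(
  answers: CouncilAnswer[],
  reviews: Review[],
): { model: string; avgScore: number }[] {
  return answers.map((a, id) => {
    const scores = reviews.filter((r) => r.answerId === id).map((r) => r.score);
    const sum = scores.reduce((s, x) => s + x, 0);
    return { model: a.model, avgScore: sum / Math.max(scores.length, 1) };
  });
}
```

A real implementation would also exclude each model's score of its own answer; that detail is omitted here for brevity.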


Section 05

Technical Implementation Highlights: Front-end Reasoning, Intelligent API Adaptation, and Local Storage

Front-end Native Reasoning Support

Models can be asked to return visible thinking content; for Claude models, LLM-MDT uses extended thinking in either an adaptive or a budget-controlled mode.
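A sketch of how the thinking parameter might be built: the `thinking: { type: "enabled", budget_tokens }` shape follows Anthropic's public Messages API, but the adaptive-budget heuristic below is purely an assumption for illustration.

```typescript
// Three possible thinking modes, mirroring the article's description.
type ThinkingMode =
  | { kind: "adaptive" }                     // derive a budget automatically
  | { kind: "budget"; budgetTokens: number } // explicit, user-set budget
  | { kind: "off" };

// Build the `thinking` field for a Claude request, or undefined if off.
function buildThinkingParam(mode: ThinkingMode, maxTokens: number) {
  switch (mode.kind) {
    case "off":
      return undefined;
    case "budget":
      return { type: "enabled", budget_tokens: mode.budgetTokens };
    case "adaptive":
      // Hypothetical heuristic: spend up to half the output budget on
      // thinking, with a 1024-token floor (Anthropic's documented minimum).
      return {
        type: "enabled",
        budget_tokens: Math.max(1024, Math.floor(maxTokens / 2)),
      };
  }
}
```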

Intelligent API Adaptation

Requests are adapted flexibly to different providers; for example, ZenMux traffic is routed to the Anthropic Messages API, and the client automatically falls back to alternative request modes when needed.
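The adaptation can be pictured as one conversation shaped into either an OpenAI-style chat completion or an Anthropic Messages call depending on the base URL. The two payload shapes below follow the respective public APIs; the URL-based routing rule is an assumption inferred from the article.

```typescript
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Pick the endpoint and payload shape for a given provider base URL.
function adaptRequest(baseUrl: string, model: string, messages: ChatMessage[]) {
  const anthropicStyle = baseUrl.includes("zenmux"); // assumed routing rule
  if (anthropicStyle) {
    // Anthropic Messages API: the system prompt is a top-level field,
    // and max_tokens is required.
    const system = messages
      .filter((m) => m.role === "system")
      .map((m) => m.content)
      .join("\n");
    return {
      url: `${baseUrl}/v1/messages`,
      body: {
        model,
        max_tokens: 4096,
        ...(system ? { system } : {}),
        messages: messages.filter((m) => m.role !== "system"),
      },
    };
  }
  // Default: OpenAI-compatible chat completions.
  return { url: `${baseUrl}/v1/chat/completions`, body: { model, messages } };
}
```

A fetch wrapper around this could implement the automatic fallback: if one request shape is rejected, retry with the other.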

Local Persistent Storage

Conversation records, stage information, and review metadata are stored locally via IndexedDB, allowing users to review the complete reasoning process at any time.
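Persisting a full turn, including all three stages, might look like the following. The database name, store name, and record fields are illustrative, not the project's actual schema; `saveTurn` uses the standard IndexedDB API and only runs in a browser.

```typescript
// One complete MDT turn, keeping the full three-stage trace reviewable.
interface TurnRecord {
  id: string;
  question: string;
  stage1: { model: string; answer: string }[]; // independent answers
  stage2: { reviewer: string; answerId: number; score: number }[]; // reviews
  stage3: string; // chair model's final synthesis
  createdAt: number;
}

function makeTurnRecord(
  question: string,
  stage1: TurnRecord["stage1"],
  stage2: TurnRecord["stage2"],
  stage3: string,
): TurnRecord {
  return {
    // Simple random id for the sketch; a real app might use crypto.randomUUID().
    id: Math.random().toString(36).slice(2),
    question,
    stage1,
    stage2,
    stage3,
    createdAt: Date.now(),
  };
}

// Browser-only: open (or create) the database and append one record.
function saveTurn(record: TurnRecord): Promise<void> {
  const idb = (globalThis as any).indexedDB; // browser global
  return new Promise((resolve, reject) => {
    const open = idb.open("llm-mdt", 1);
    open.onupgradeneeded = () =>
      open.result.createObjectStore("turns", { keyPath: "id" });
    open.onsuccess = () => {
      const tx = open.result.transaction("turns", "readwrite");
      tx.objectStore("turns").put(record);
      tx.oncomplete = () => resolve();
      tx.onerror = () => reject(tx.error);
    };
    open.onerror = () => reject(open.error);
  });
}
```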


Section 06

Configuration and Usage: Easy to Get Started in a Few Steps

To use LLM-MDT, you need to configure:

  • Base URL: OpenAI-compatible API endpoints (e.g., OpenRouter, ZenMux);
  • API Key: Personal key;
  • Council Models: List of models participating in answering and reviewing;
  • Chair Model: Model responsible for generating the comprehensive answer;
  • Title Model (optional): Lightweight model for generating conversation titles.

LLM-MDT works with any OpenAI-API-compatible service, such as OpenAI, OpenRouter, and ZenMux.
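The settings above could be modeled as a single config object; the field names and validation rules below are assumptions for illustration, not the project's actual schema.

```typescript
// Illustrative shape of the settings described above.
interface MdtConfig {
  baseUrl: string;         // OpenAI-compatible endpoint, e.g. OpenRouter
  apiKey: string;          // personal key, stored locally only
  councilModels: string[]; // models that answer and review
  chairModel: string;      // model that writes the final synthesis
  titleModel?: string;     // optional lightweight model for titles
}

// Minimal sanity check before starting a session; returns a list of problems.
function validateConfig(cfg: MdtConfig): string[] {
  const errors: string[] = [];
  if (!/^https?:\/\//.test(cfg.baseUrl)) errors.push("baseUrl must be an http(s) URL");
  if (!cfg.apiKey) errors.push("apiKey is required");
  if (cfg.councilModels.length < 2) errors.push("need at least two council models");
  if (!cfg.chairModel) errors.push("chairModel is required");
  return errors;
}
```

Requiring at least two council models reflects the MDT idea itself: with a single answerer there is nothing to review or synthesize.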

Section 07

Academic Background and Application Prospects: From Research to Practical Tool

LLM-MDT originates from published research. The related paper "ColaCare: Enhancing Electronic Health Record Modeling through Large Language Model-Driven Multi-Agent Collaboration" has been published at the WWW 2025 conference. This project transforms the MDT concept into a practical tool and has broad application prospects in complex scenarios such as medical diagnosis, legal consultation, and academic research.


Section 08

Security and Privacy Considerations & Conclusion

Security and Privacy Notes

  • API keys are stored locally; keep your device and browser session secure;
  • Suitable for personal use or self-hosting, not for deployments exposed to untrusted users;
  • Multi-user environments require additional security measures.

Conclusion

LLM-MDT represents a new AI paradigm: through collaborative mechanisms, multiple models compensate for one another's weaknesses, lowering the barrier to entry while improving interpretability and reliability. As multimodal and agent technologies mature, such collaborative architectures will prove valuable in many more fields.