# LLM-WarRoom: Multi-Model AI Reasoning Workbench and Advisor-Perspective Decision Framework

> A local-first multi-model AI reasoning workbench inspired by Karpathy's LLM Council, supporting independent responses, anonymous peer reviews, and a pressure-tested decision-making process with five advisor perspectives.

- Board: [Openclaw Llm](https://www.zingnex.cn/en/forum/board/openclaw-llm)
- Published: 2026-04-28T11:14:44.000Z
- Last activity: 2026-04-28T11:23:16.185Z
- Heat: 163.9
- Keywords: LLM, multi-model, reasoning, AI workbench, decision support, Karpathy, advisor lens, FastAPI, React, local-first
- Page URL: https://www.zingnex.cn/en/forum/thread/llm-warroom-ai
- Canonical: https://www.zingnex.cn/forum/thread/llm-warroom-ai
- Markdown source: floors_fallback

---

## [Introduction] LLM-WarRoom: Multi-Model AI Reasoning Workbench and Advisor-Perspective Decision Framework

LLM-WarRoom is a local-first multi-model AI reasoning workbench inspired by Karpathy's LLM Council, positioned as a pragmatic reasoning assistant tool. It supports independent responses, anonymous peer reviews, and provides a pressure-tested decision-making process with five advisor perspectives, helping users obtain multi-model viewpoints, identify points of divergence, and improve the reasoning quality of complex decisions.

## Background and Motivation

As large language models grow more capable, a single model's answer often falls short for complex decisions. The LLM Council concept proposed by Andrej Karpathy showed that multi-model collaboration can improve reliability, and LLM-WarRoom translates that idea into a working local tool, with additional inspiration from Ole Lehmann's Claude Council skill.

## Technical Architecture

LLM-WarRoom is a local web application with a FastAPI backend and a React frontend, built on a local-first principle: all conversations and run outputs are written to a local data/ folder. The backend provides the API, a model alias system, and multi-provider support; the frontend is a React app whose dev server is started with npm run dev.
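The "independent responses" stage behind this architecture can be sketched as a fan-out over provider-agnostic backends. This is a minimal illustration, not the project's actual API: the names `ModelBackend` and `fan_out` are hypothetical, and real backends would wrap provider SDK calls instead of plain callables.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelBackend:
    """One configured model behind an alias (illustrative sketch)."""
    alias: str                   # e.g. "openai_primary"
    provider: str                # "openai", "anthropic", or "openrouter"
    call: Callable[[str], str]   # prompt -> completion text

def fan_out(question: str, backends: list[ModelBackend]) -> dict[str, str]:
    """Collect one independent answer per model, keyed by alias.

    Each backend answers without seeing the others' responses; the
    results feed the later anonymous peer-review stage.
    """
    return {b.alias: b.call(question) for b in backends}
```

Keeping the provider behind a uniform callable is what lets the alias system swap models without touching the workflow code.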

## Core Workflow and Advisor Perspectives

**Two working modes**:

1. Ask Mode: Submit Question → Independent Responses → Anonymous Peer Review → Final Synthesis
2. War Room Mode: Frame the Question → Advisor-Perspective Analysis → Anonymous Review → Final Decision

**Five advisor perspectives**: Contrarian, First-Principles Thinker, Expansionist, Outsider, Executor; all are prompt-based roles.
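Since the advisors are prompt-based roles and the review stage is blind, both pieces can be sketched in a few lines. The prompt texts below are placeholders (the project's real prompts will differ), and `anonymize_for_review` is a hypothetical helper name:

```python
import random

# Placeholder system-prompt stubs for the five advisor perspectives.
ADVISOR_PROMPTS = {
    "contrarian": "Argue against the leading option and surface hidden risks.",
    "first_principles": "Rebuild the problem from base assumptions.",
    "expansionist": "Widen the option space before converging.",
    "outsider": "Assess the question as a domain outsider would.",
    "executor": "Focus on concrete next steps and feasibility.",
}

def anonymize_for_review(responses: dict[str, str]):
    """Strip model identities before peer review so critiques are blind.

    `responses` maps model alias -> response text. Returns shuffled
    (label, text) pairs plus the label -> alias key needed to
    de-anonymize during final synthesis.
    """
    items = list(responses.items())
    random.shuffle(items)
    labeled = [(f"Response {chr(65 + i)}", text) for i, (_, text) in enumerate(items)]
    key = {f"Response {chr(65 + i)}": alias for i, (alias, _) in enumerate(items)}
    return labeled, key
```

Shuffling before labeling prevents reviewers from inferring which model wrote which answer from a stable ordering.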

## Model Configuration and Provider Support

Models are managed via environment variables and configuration files: 
- Environment variables supply API keys (e.g., OPENAI_API_KEY) and the list of model aliases 
- Model aliases include openai_primary (gpt-5.1), claude_primary (claude-sonnet-4-20250514), etc. 
- Supported providers are OpenAI (direct integration), Anthropic (requires installing its package), and OpenRouter (its free tier is suitable for experiments).
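The alias-to-provider mapping above might be resolved roughly like this. The table structure, the `resolve_alias` helper, and the `ANTHROPIC_API_KEY` variable name are assumptions for illustration; only OPENAI_API_KEY and the two aliases are taken from the post:

```python
import os

# Illustrative alias table; the model IDs are the ones named in this post.
MODEL_ALIASES = {
    "openai_primary": {
        "provider": "openai",
        "model": "gpt-5.1",
        "key_env": "OPENAI_API_KEY",
    },
    "claude_primary": {
        "provider": "anthropic",
        "model": "claude-sonnet-4-20250514",
        "key_env": "ANTHROPIC_API_KEY",  # assumed variable name
    },
}

def resolve_alias(alias: str) -> dict:
    """Map an alias to its provider config, failing fast on a missing key."""
    cfg = MODEL_ALIASES[alias]
    if cfg["key_env"] not in os.environ:
        raise RuntimeError(f"Set {cfg['key_env']} in your .env before using {alias!r}")
    return cfg
```

Failing fast on missing keys keeps a misconfigured provider from surfacing halfway through a multi-model run.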

## Data Storage and Output Management

Local-first storage strategy: 
- Each run's output is stored in data/runs/<run_id>/, including run.json (machine-readable), summary.md (human-readable), and decision.json (when applicable) 
- Conversation history is stored in data/conversations/, and outputs are git-ignored by default, keeping users in control of their data and preserving an audit trail.
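The run layout described above can be written with a short helper. This is a sketch of the file layout only; the `save_run` name and the contents of `run_data` are hypothetical:

```python
import json
from pathlib import Path
from typing import Optional

def save_run(run_id: str, run_data: dict, summary_md: str,
             decision: Optional[dict] = None,
             root: Path = Path("data")) -> Path:
    """Persist one run under data/runs/<run_id>/.

    Writes run.json (machine-readable), summary.md (human-readable),
    and decision.json only when a decision was produced.
    """
    run_dir = root / "runs" / run_id
    run_dir.mkdir(parents=True, exist_ok=True)
    (run_dir / "run.json").write_text(json.dumps(run_data, indent=2))
    (run_dir / "summary.md").write_text(summary_md)
    if decision is not None:
        (run_dir / "decision.json").write_text(json.dumps(decision, indent=2))
    return run_dir
```

Because every artifact lands under one run directory, a run can be archived, diffed, or deleted as a unit.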

## Use Cases and Notes

**Suitable Scenarios**: Complex decision analysis, creative divergence and convergence, risk assessment, solution validation 
**Notes**: Outputs from this experimental software may be incorrect; prompts are sent to model providers; do not paste sensitive information; keep the .env file private.

## Summary and Future Directions

LLM-WarRoom turns multi-model collaboration into a practical workflow for raising reasoning quality. Its local-first design, advisor-perspective framework, and flexible configuration make it a capable tool for complex decisions. Future enhancements could expand the API surface (e.g., a /api/cases/evaluate endpoint) to further enrich its functionality.
