Zing Forum


local-multi-agent-company: Localized Multi-Agent Software Development Team Architecture

A multi-agent coding system for the Unraid platform, using LangGraph to orchestrate 17 professional Worker roles, supporting GitHub automation, hierarchical deployment, and local LLM routing to achieve a controllable AI-driven software development process.

Tags: Multi-agent, LangGraph, Unraid, GitHub automation, Local LLM, Software development, Worker orchestration, Mistral, Qwen, Hierarchical deployment
Published 2026-04-11 20:14 · Recent activity 2026-04-11 20:21 · Estimated read 8 min

Section 01

Introduction: Core Overview of the local-multi-agent-company Project

local-multi-agent-company is a localized multi-agent software development team architecture for the Unraid platform. It uses LangGraph to orchestrate 17 specialized Worker roles and supports GitHub automation, hierarchical deployment, and local LLM routing to deliver a controllable AI-driven software development process. At its core, the project simulates the division of labor and collaboration of a professional development team, addressing the fact that a single AI assistant cannot cover every specialized stage of development, while balancing data privacy, controllability, and development efficiency.


Section 02

Project Background: Limitations of Single AI Assistants and the Need for Multi-Agent Collaboration

Now that AI-assisted programming is widespread, a single AI assistant can hardly satisfy the specialized requirements of every stage at once, from requirement analysis and architecture design through coding and test verification. The local-multi-agent-company project builds a multi-agent system that simulates the operation of a complete software development company, letting AI collaborate like a professional team.


Section 03

Core Architecture: Division of Labor System for 17 Professional Workers

The project includes 17 specialized Workers with clear division of labor:

  • Orchestrator: Core orchestrator, responsible for task reception, workflow control, and state persistence;
  • Requirement Analysis Layer: requirements-worker extracts requirements, cost-worker estimates resources, human-resources-worker recommends Worker configurations;
  • Research & Design Layer: research-worker analyzes repositories, architecture-worker designs architectures, data-worker provides data processing suggestions, ux-worker provides UI suggestions;
  • Implementation Layer: coding-worker executes coding, reviewer-worker reviews code, test-worker performs testing, security-worker identifies risks;
  • Delivery Layer: validation-worker verifies requirements, documentation-worker creates documents, github-worker manages code submissions, deploy-worker executes hierarchical deployment, qa-worker performs testing, memory-worker persists learning outcomes;
  • Additionally, there is a web-ui that provides task management and monitoring interfaces.
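The layered flow above can be sketched in plain Python. This is an illustrative mock of how the Orchestrator might dispatch a task through the four layers; the Worker names come from the article, but the dispatch logic is an assumption, not the project's actual LangGraph code.

```python
# Hypothetical sketch of the layered Worker pipeline. Worker names follow
# the article; the ordering/dispatch logic here is illustrative only.
LAYERS = {
    "requirements": ["requirements-worker", "cost-worker", "human-resources-worker"],
    "research_design": ["research-worker", "architecture-worker", "data-worker", "ux-worker"],
    "implementation": ["coding-worker", "reviewer-worker", "test-worker", "security-worker"],
    "delivery": ["validation-worker", "documentation-worker", "github-worker",
                 "deploy-worker", "qa-worker", "memory-worker"],
}

def run_pipeline(task: str) -> list:
    """Run each layer in order; workers within a layer could run in parallel."""
    log = []
    for layer, workers in LAYERS.items():
        for worker in workers:
            log.append(f"{worker}: handled '{task}' in {layer}")
    return log

trace = run_pipeline("add login endpoint")
```

Note that the four layers together account for all 17 Workers, with the Orchestrator and web-ui sitting outside the layered division of labor.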

Section 04

Local-First & Orchestration Mechanism: Controllability and Reliability Design

The project adopts a local-first design: containerized deployment on the Unraid platform, local LLM endpoints (Mistral and Qwen series by default), and data kept local to preserve privacy. For controllability, hierarchical deployment is the default (manual approval is required to promote from staging to production), and approval is enforced at key nodes. Orchestration uses the LangGraph framework, which supports complex state transitions and parallel execution; state is persisted to an SQLite database so that tasks are recoverable.
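The two controllability ideas above, SQLite-backed state persistence and a manual approval gate before production, can be illustrated with stdlib Python. The table schema and `promote` function are hypothetical assumptions for demonstration; the project persists LangGraph checkpoints in its own format.

```python
# Minimal sketch of checkpointed task state plus an approval gate before
# production. Schema and function names are illustrative, not the project's.
import json
import sqlite3

def save_state(db, task_id, state):
    db.execute("INSERT OR REPLACE INTO checkpoints VALUES (?, ?)",
               (task_id, json.dumps(state)))
    db.commit()

def load_state(db, task_id):
    row = db.execute("SELECT state FROM checkpoints WHERE task_id = ?",
                     (task_id,)).fetchone()
    return json.loads(row[0]) if row else None

def promote(db, task_id, approved):
    """Promote a task from staging to production only with explicit approval."""
    state = load_state(db, task_id)
    if state["stage"] == "staging" and not approved:
        return state  # blocked: waiting for human approval
    state["stage"] = "production"
    save_state(db, task_id, state)
    return state

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE checkpoints (task_id TEXT PRIMARY KEY, state TEXT)")
save_state(db, "t1", {"stage": "staging"})
blocked = promote(db, "t1", approved=False)   # stays in staging
released = promote(db, "t1", approved=True)   # promoted after approval
```

Because every state transition is written back to SQLite, a crashed or paused task can be resumed from its last checkpoint rather than restarted.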


Section 05

Model Routing & Integration Capabilities: Efficiency and Process Integration

The project supports flexible model routing: Mistral handles lightweight tasks (document generation, classification), Qwen handles heavy tasks (architecture design, security review), and each Worker can be configured with its own model parameters. GitHub is deeply integrated as the code management source; github-worker handles code submission and PR creation, using a repository whitelist and a change confirmation mechanism to ensure security.
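A routing table like the one described might look like the sketch below. The endpoint URLs, route names, and override behavior are assumptions for illustration, not the project's shipped defaults.

```python
# Illustrative per-Worker model routing: lightweight tasks -> Mistral,
# heavy tasks -> Qwen. All names and URLs here are hypothetical.
DEFAULT_ROUTES = {
    "documentation-worker": "mistral",   # lightweight: document generation
    "requirements-worker": "mistral",    # lightweight: classification/extraction
    "architecture-worker": "qwen",       # heavy: architecture design
    "security-worker": "qwen",           # heavy: security review
}

ENDPOINTS = {
    "mistral": "http://localhost:8080/v1",  # hypothetical local endpoint
    "qwen": "http://localhost:8081/v1",     # hypothetical local endpoint
}

def resolve_endpoint(worker, overrides=None):
    """Per-Worker override beats the default route; fall back to mistral."""
    routes = dict(DEFAULT_ROUTES)
    routes.update(overrides or {})
    return ENDPOINTS[routes.get(worker, "mistral")]
```

The per-Worker override mirrors the article's point that each Worker can carry its own model configuration independently of the global defaults.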


Section 06

Security & Observability: Risk Prevention and Debugging Tools

Security mechanisms include: untrusted external content handling, prompt injection identification, key management (local file storage, Docker read-only mount), and Shell command whitelist. The web interface provides functions such as task management, Worker configuration, and trusted source management; the Debug center supports system snapshot download and runtime file diagnosis, facilitating troubleshooting and configuration optimization.
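Of the mechanisms listed, the Shell command whitelist is the most concrete, and a minimal version can be sketched with the standard library. The allowed command set below is a hypothetical example; the project's actual whitelist is its own configuration.

```python
# Sketch of a Shell command whitelist check. The ALLOWED_COMMANDS set is a
# hypothetical example, not the project's actual configuration.
import shlex

ALLOWED_COMMANDS = {"git", "ls", "cat", "pytest"}

def is_allowed(command_line):
    """Permit only whitelisted executables; reject shell metacharacters."""
    if any(ch in command_line for ch in ";|&$`"):
        return False  # block command chaining and substitution tricks
    try:
        tokens = shlex.split(command_line)
    except ValueError:
        return False  # malformed quoting
    return bool(tokens) and tokens[0] in ALLOWED_COMMANDS
```

Rejecting metacharacters before tokenizing matters here: a whitelisted executable like `ls` must not smuggle in a second command via `;` or `|`.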


Section 07

Applicable Scenarios & Usage Recommendations: Implementation Guide

Applicable scenarios: local developers and small teams, privacy-sensitive enterprises, multi-agent architecture researchers, and Unraid users.

Usage recommendations:

  • Start with simple tasks for the first deployment;
  • Adjust model routing and timeout configurations to match your hardware;
  • Use the approval mechanism to keep human supervision at key nodes;
  • Regularly review the learning outcomes accumulated by memory-worker;
  • Verify on a small scale before expanding to production environments.


Section 08

Conclusion: Exploration of Professionalization Direction for AI-Assisted Programming

local-multi-agent-company is a notable step in the evolution of AI-assisted programming toward professionalization and systematization. By simulating the division of labor of real teams, it demonstrates that a multi-agent architecture can deliver more reliable and controllable automation than a single AI assistant. Although deployment and tuning require technical investment, the project offers real reference value for anyone exploring the boundaries of AI-driven development.