Zing Forum

Reading

InfraMind: An Automated Infrastructure Root Cause Analysis System Based on Multi-Agent RAG

InfraMind is an LLMOps platform designed specifically for SRE and DevOps teams. It leverages multi-agent orchestration on AWS Bedrock, Retrieval-Augmented Generation (RAG), and self-correcting LLM workflows to enable zero-touch incident classification and root cause analysis.

Tags: Multi-Agent RAG · Root Cause Analysis · AWS Bedrock · LLMOps · AIOps · Ops Automation · ChromaDB · Self-Correcting Workflow
Published 2026-04-17 01:16 · Recent activity 2026-04-17 01:22 · Estimated read: 5 min

Section 01

[Introduction] InfraMind: An Automated Root Cause Analysis System Driven by Multi-Agent RAG

InfraMind is an LLMOps platform designed specifically for SRE and DevOps teams. Built on AWS Bedrock's multi-agent orchestration, Retrieval-Augmented Generation (RAG), and self-correcting LLM workflows, it enables zero-touch incident classification and root cause analysis, addressing the core pain points of difficult troubleshooting and high Mean Time to Recovery (MTTR) in cloud-native architectures.


Section 02

Project Background and Challenges

In modern cloud-native architectures, traditional monitoring alerts only signal that a failure has occurred; they cannot analyze its cause or propose a fix. As system scale grows and component complexity increases, manual root cause investigation becomes extremely difficult, driving MTTR up. InfraMind addresses this operational pain point by using LLMOps techniques to deliver zero-touch incident classification and comprehensive observability.


Section 03

Core Modules of System Architecture

  1. Data Ingestion Layer: An Airflow DAG retrieves raw logs from S3 and normalizes them into standardized JSON;
  2. RAG Knowledge Base: Operational manuals are vectorized with AWS Titan Embed v2 and stored in ChromaDB. The top 6 documents are retrieved and re-ranked with Maximal Marginal Relevance (MMR) to optimize the context window;
  3. Dynamic Model Selection: Logs shorter than 2000 characters are routed to the Llama3 8B model; longer logs go to the 70B model, balancing quality and cost.
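The length-based router in point 3 can be sketched as a small function. The Bedrock model IDs below are illustrative assumptions, not confirmed by the article; the 2000-character cutoff comes from the text.

```python
# Minimal sketch of InfraMind's length-based model router.
# Model IDs are illustrative Bedrock identifiers (assumptions).
SMALL_MODEL = "meta.llama3-8b-instruct-v1:0"
LARGE_MODEL = "meta.llama3-70b-instruct-v1:0"
LENGTH_CUTOFF = 2000  # characters, per the article

def select_model(log_text: str) -> str:
    """Route short logs to the 8B model, long logs to the 70B model."""
    return SMALL_MODEL if len(log_text) < LENGTH_CUTOFF else LARGE_MODEL
```

Keeping the cutoff in a single constant makes it easy to tune against observed cost/quality metrics later.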

Section 04

Multi-Agent Analysis Workflow

Five-stage collaboration based on AWS Bedrock:

  1. Investigation agent generates incident summary;
  2. Root cause analysis agent identifies the fundamental cause of the failure;
  3. Repair plan generation agent outputs detailed steps;
  4. Formatting agent integrates into structured RCA JSON;
  5. A critic agent (Mistral 7B) scores the output against a 0.8 threshold; if the score falls short, the workflow self-corrects and retries (up to 2 times).
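The five stages above can be sketched as a single orchestration loop. The `run_agent` and `critic` callables are hypothetical stand-ins for Bedrock model invocations (the article does not specify InfraMind's API), which keeps the retry logic testable offline.

```python
# Sketch of the five-stage workflow with the critic-gated retry loop.
# `run_agent(stage, payload)` and `critic(rca)` are injected stand-ins
# for Bedrock calls; names and signatures are assumptions.
from dataclasses import dataclass

CRITIC_THRESHOLD = 0.8  # minimum critic score to accept an RCA (from the text)
MAX_RETRIES = 2         # self-correction retries after the first attempt

@dataclass
class RCAResult:
    summary: str
    root_cause: str
    repair_plan: str
    attempts: int = 1

def analyze_incident(log_text, run_agent, critic):
    """Run the five stages in order, retrying when the critic score is low."""
    feedback = ""
    rca = None
    for attempt in range(1 + MAX_RETRIES):
        summary = run_agent("investigate", log_text + feedback)   # stage 1
        root_cause = run_agent("root_cause", summary)             # stage 2
        plan = run_agent("repair_plan", root_cause)               # stage 3
        rca = RCAResult(summary, root_cause, plan,                # stage 4:
                        attempts=attempt + 1)                     # structured RCA
        score = critic(rca)                                       # stage 5: critic
        if score >= CRITIC_THRESHOLD:
            return rca
        feedback = f"\n[critic feedback: score {score:.2f}, please revise]"
    return rca  # best effort after exhausting retries
```

Feeding the critic's score back into the next attempt's prompt is one common way to implement the self-correction step; the article does not detail the exact feedback format.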

Section 05

Technical Innovation Highlights

  1. Multi-agent division of labor and collaboration improve analysis quality;
  2. Self-correction mechanism ensures output reliability;
  3. RAG integration with enterprise operational knowledge enhances professionalism;
  4. Dynamic model selection controls inference costs.

Section 06

Observability and Output Delivery

Integrates MLflow (hosted on DagsHub) to track the entire lifecycle, DeepEval to evaluate the quality of generated content, and Grafana to visualize throughput, cost, and latency. RCA results are written to the S3 rca-results/ directory, and Slack alerts close the notification loop.


Section 07

Practical Insights and Reference Value

InfraMind provides a full-link reference architecture for AIOps platforms, demonstrating the practical value of LLM agents in operational scenarios. Teams building similar systems can draw on its design ideas, such as multi-agent collaboration, the self-correction mechanism, and RAG integration.