Section 01
Introduction: LLM_MVC—A Minimal Local RAG Q&A Bot Implementation
LLM_MVC is a minimal viable code implementation of a local RAG (retrieval-augmented generation) Q&A system built on a Markdown knowledge base. It supports automatic chunking, ChromaDB vector storage, multi-file indexing, and answer generation with citations. The project keeps its dependencies minimal (three core libraries: openai, chromadb, python-dotenv) and its code concise. The goal is to help developers grasp the core working principles of RAG with a minimal barrier to entry, while remaining practical enough to use directly for personal knowledge-base management and Q&A.
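To make the "automatic chunking" step concrete, here is a minimal sketch of how Markdown text can be split into retrieval-sized chunks: break at headings first, then split oversized sections at paragraph boundaries. The function name `chunk_markdown` and the `max_chars` parameter are illustrative assumptions, not the project's actual API.

```python
def chunk_markdown(text: str, max_chars: int = 500) -> list[str]:
    """Split Markdown into chunks: first at headings, then at paragraph
    boundaries when a section exceeds max_chars.

    Hypothetical helper for illustration; not LLM_MVC's actual code.
    """
    # Pass 1: split at heading lines so each section stays together.
    sections, current = [], []
    for line in text.splitlines():
        if line.startswith("#") and current:
            sections.append("\n".join(current).strip())
            current = []
        current.append(line)
    if current:
        sections.append("\n".join(current).strip())

    # Pass 2: break oversized sections at blank-line (paragraph) boundaries.
    # A single paragraph longer than max_chars is kept whole.
    chunks = []
    for section in sections:
        if len(section) <= max_chars:
            chunks.append(section)
            continue
        piece = ""
        for para in section.split("\n\n"):
            if piece and len(piece) + len(para) + 2 > max_chars:
                chunks.append(piece.strip())
                piece = ""
            piece += para + "\n\n"
        if piece.strip():
            chunks.append(piece.strip())
    return [c for c in chunks if c]


doc = "# Title\n\nIntro paragraph.\n\n## Details\n\n" + "word " * 120
chunks = chunk_markdown(doc, max_chars=300)
# The first chunk contains the title section; the long "Details" body
# is separated from its heading once it exceeds max_chars.
```

Each resulting chunk would then be embedded and stored in a ChromaDB collection, with its source file recorded as metadata so answers can cite where the text came from.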