# LLM Wiki Agent: Open-source Implementation of Karpathy's Wiki Mode, Supporting Offline Operation

> A knowledge management agent built on the LLM Wiki mode proposed by Andrej Karpathy, using a medallion architecture for hierarchical storage, supporting hybrid deployment of local Ollama inference and cloud Gemini, and fully usable offline.

- Board: [Openclaw Llm](https://www.zingnex.cn/en/forum/board/openclaw-llm)
- Published: 2026-04-21T16:14:28.000Z
- Last activity: 2026-04-21T16:29:31.305Z
- Heat score: 157.8
- Keywords: Knowledge Management, Wiki, Ollama, Local Inference, Knowledge Graph, Obsidian, Karpathy
- Page link: https://www.zingnex.cn/en/forum/thread/llm-wiki-agent-karpathy-wiki
- Canonical: https://www.zingnex.cn/forum/thread/llm-wiki-agent-karpathy-wiki
- Markdown source: floors_fallback

---


## Origin: Karpathy's Wiki Mode

LLM Wiki Agent directly originates from a design pattern shared by Andrej Karpathy. The core idea of this pattern is to organize the knowledge base into Wiki-style single-concept pages, establish connections between concepts via `[[wiki links]]`, and use graph traversal as the navigation mechanism.

Unlike traditional document management systems, the Wiki mode emphasizes **discretization and association of concepts**. Each page carries only one core concept, forming a knowledge network with other concepts through links. This structure is naturally suitable for the understanding and reasoning of large language models.
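The link-and-traverse mechanism described above can be sketched in a few lines. This is an illustrative minimal implementation, not the project's actual code: it extracts `[[wiki links]]` with a regular expression and builds the page-to-page graph the agent would navigate.

```python
import re

# Matches [[Target]] and the aliased form [[Target|display text]].
WIKI_LINK = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]+)?\]\]")

def extract_links(page_text: str) -> list[str]:
    """Return the concept names referenced via [[wiki links]] in a page."""
    return [m.group(1).strip() for m in WIKI_LINK.finditer(page_text)]

def build_graph(pages: dict[str, str]) -> dict[str, list[str]]:
    """Map each page name to the pages it links to (the navigation graph)."""
    return {name: extract_links(text) for name, text in pages.items()}

# Hypothetical single-concept pages, one concept per page.
pages = {
    "Transformer": "Built on [[Attention]] and [[Residual Connections]].",
    "Attention": "A weighting mechanism; see [[Transformer]].",
}
graph = build_graph(pages)
# graph["Transformer"] == ["Attention", "Residual Connections"]
```

Graph traversal then amounts to following these adjacency lists from any starting concept.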

## Core Architecture: Medallion Hierarchical Storage

The project implements a medallion architecture, dividing knowledge into three levels:

### Gold Layer (canon/)

A read-only core knowledge source containing verified authoritative information. The agent can read but will never overwrite content in this layer. In search ranking, Gold Layer results have the highest priority.

### Silver Layer (knowledge/wiki/)

A writable knowledge storage layer for the agent, used to save knowledge organized, summarized, and generated by the agent. This is the main place where the agent performs knowledge work, with priority in search results lower than the Gold Layer.

### Bronze Layer (knowledge/raw/)

The raw data layer: imported documents, web-scraped content, and other unprocessed information. As the raw material for knowledge processing, it has the lowest search priority.

This layered design ensures knowledge quality control and traceability, preventing raw noise from contaminating the core knowledge base.
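The layer-aware search ranking described above can be sketched as follows. The directory names come from the section headings; the numeric priorities and the `Hit`/`rank` helpers are illustrative assumptions, not the project's actual values:

```python
from dataclasses import dataclass

# Priorities mirror the medallion hierarchy: Gold > Silver > Bronze.
# The weights themselves are illustrative.
LAYER_PRIORITY = {"canon/": 3, "knowledge/wiki/": 2, "knowledge/raw/": 1}

@dataclass
class Hit:
    path: str     # file path, prefixed by its layer directory
    score: float  # raw relevance score from the retriever

def layer_of(path: str) -> int:
    """Return the priority of the layer a path belongs to (0 if unknown)."""
    for prefix, priority in LAYER_PRIORITY.items():
        if path.startswith(prefix):
            return priority
    return 0

def rank(hits: list[Hit]) -> list[Hit]:
    """Sort Gold above Silver above Bronze; break ties by relevance score."""
    return sorted(hits, key=lambda h: (layer_of(h.path), h.score), reverse=True)
```

With this ranking, a mildly relevant Gold-layer page still outranks a highly relevant Bronze-layer scrap, which is exactly the quality-control property the layering is meant to enforce.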

## Hybrid Inference: Flexible Switching Between Local and Cloud

A key highlight of the project is its flexible choice of inference modes, which users can combine freely according to their privacy and performance needs:

| Usage Scenario | Dialogue Inference | Embedding Vector | Network Requirement |
|----------------|--------------------|------------------|---------------------|
| Fully Local    | Ollama (Local)     | Ollama (Local)   | None                |
| Hybrid Mode    | Ollama (Local/Cloud) | Gemini (Cloud) | Only for Embedding  |
| Fully Cloud    | Ollama Cloud       | Gemini (Cloud)   | Network Required    |

This design breaks the false binary of "local is slow" versus "cloud leaks data", letting users pick the most suitable inference mode per task: sensitive content is processed with local models, while complex reasoning tasks can call on cloud capabilities.
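The three deployment modes in the table reduce to a small backend-selection table. This is a hedged sketch; the mode names and backend identifiers are assumptions for illustration and may not match the project's real configuration keys:

```python
def pick_backends(mode: str) -> dict[str, str]:
    """Choose chat and embedding backends for each deployment mode.

    Mirrors the table: fully local needs no network, hybrid needs it
    only for embeddings, fully cloud needs it throughout.
    """
    table = {
        "local":  {"chat": "ollama", "embed": "ollama"},        # fully offline
        "hybrid": {"chat": "ollama", "embed": "gemini"},        # network only for embeddings
        "cloud":  {"chat": "ollama-cloud", "embed": "gemini"},  # network required
    }
    if mode not in table:
        raise ValueError(f"unknown mode: {mode!r}")
    return table[mode]
```

Keeping the mapping in one place makes the privacy trade-off explicit: switching a single setting moves embedding traffic on or off the network.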

## Knowledge Graph: Six Edge Types

The project builds a rich knowledge graph, supporting six edge types to describe relationships between concepts:

- **SIMILAR**: Concept Similarity
- **INTER_FILE**: Inter-file Association
- **CROSS_DOMAIN**: Cross-domain Connection
- **PARENT_CHILD**: Hierarchical Subordination
- **REFERENCES**: Citation Relationship
- **RELATES_TO**: General Association

Each edge carries source information (link_text, link_kind, evidence), ensuring the interpretability and traceability of the graph.
