Zing Forum


LLM Dungeon Crawler: When Large Language Models Meet Classic Dungeon Exploration

Explore an open-source game that combines a traditional text-based RPG with modern AI technology, using a local LLM to generate an immersive narrative experience

Tags: LLM dungeon crawler · text RPG · Ollama · Gemma · open-source game · local AI · procedural generation
Published 2026-04-07 16:14 · Recent activity 2026-04-07 16:28 · Estimated read: 6 min

Section 01

Introduction: LLM Dungeon Crawler – Fusion of Classic Dungeon Exploration and Local AI

LLM Dungeon Crawler is an open-source text adventure game by Jonathan B. Coe that combines traditional dungeon-crawling gameplay with modern large language model (LLM) technology. It runs entirely locally (deploying models such as Google Gemma 4 via Ollama), so it delivers an immersive narrative experience without an internet connection. The project's core is its dual-engine design: the game engine handles rule logic such as character attributes and combat calculations, while the LLM generates environment descriptions and NPC dialogue, balancing gameplay depth with content variety.


Section 02

Background and Core Design Philosophy

The project aims to separate "computable game logic" from "creative narrative generation": the game engine ensures gameplay consistency and balance, while the LLM focuses on imaginative scene descriptions. It uses the Google Gemma 4 model by default, running locally via Ollama to protect player privacy (data never leaves the device), making it suitable for offline users or those who value privacy.


Section 03

Technical Architecture and Implementation Details

The project is written in Python and uses uv for package management. It interacts with the local LLM by calling Ollama's REST API, and swapping models is simple (a --model parameter accepts any Ollama-compatible model). The core loop follows the classic dungeon-crawl pattern: the dungeon is procedurally generated, and as the player explores, the game formats the scene state (position, environment, etc.) into a prompt for the LLM, which generates descriptive text to present to the player.
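
That loop can be sketched roughly as follows. This is an illustrative approximation, not the project's actual code: the scene fields, prompt wording, and function names (build_prompt, describe_scene) are assumptions. Ollama's /api/generate endpoint, its JSON request fields (model, prompt, stream), and the response field are its documented REST interface.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local REST endpoint


def build_prompt(scene: dict) -> str:
    """Format the current scene state into a narration prompt (illustrative layout)."""
    return (
        "You are the narrator of a dungeon crawler.\n"
        f"Room: {scene['room']}. Exits: {', '.join(scene['exits'])}.\n"
        f"Items visible: {', '.join(scene['items']) or 'none'}.\n"
        "Describe this room to the player in two atmospheric sentences."
    )


def describe_scene(scene: dict, model: str = "gemma4:e4b") -> str:
    """POST the prompt to the local Ollama server and return the generated text."""
    payload = json.dumps(
        {"model": model, "prompt": build_prompt(scene), "stream": False}
    ).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example (requires a running Ollama server):
# scene = {"room": "a collapsed shrine", "exits": ["north", "east"], "items": ["rusty key"]}
# print(describe_scene(scene))
```

Because the prompt carries only the current scene state, any Ollama-compatible model can be dropped in without touching the game engine itself.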


Section 04

Gameplay and Interaction Methods

The game is controlled through text commands and supports the following operations:

  • Movement and exploration: go north/south/east/west
  • Combat: attack (engine handles damage and hit rate)
  • Item management: take/use/equip/unequip <item>
  • Social interaction: talk (LLM generates dynamic NPC dialogues with no preset scripts)
  • Information query: look (re-describes the room), status/inventory (character status/inventory), help (command list).
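
A command set like the one above typically comes down to a small dispatch table that routes deterministic verbs to the game engine and creative verbs to the LLM. The sketch below illustrates that split using the commands listed here; the function names and the routing scheme are assumptions, not the project's implementation.

```python
def parse_command(line: str) -> tuple[str, str]:
    """Split a raw input line into a verb and its argument string."""
    verb, _, arg = line.strip().lower().partition(" ")
    return verb, arg


# Verbs the rules engine resolves deterministically (movement, combat, items, info).
ENGINE_VERBS = {"go", "attack", "take", "use", "equip", "unequip",
                "status", "inventory", "help"}
# Verbs that trigger fresh LLM generation (dialogue, room descriptions).
LLM_VERBS = {"talk", "look"}


def route(line: str) -> str:
    """Decide which subsystem should handle a command (illustrative routing)."""
    verb, arg = parse_command(line)
    if verb in ENGINE_VERBS:
        return f"engine:{verb}"  # rule logic: damage rolls, inventory updates, etc.
    if verb in LLM_VERBS:
        return f"llm:{verb}"     # creative text: NPC dialogue, scene narration
    return "unknown"
```

This mirrors the dual-engine design from Section 01: attack never reaches the LLM, so combat stays balanced, while talk never reaches the rules engine, so dialogue stays unscripted.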

Section 05

Local Deployment and Configuration Steps

Deployment has three prerequisites: install uv (a Python package manager), install Ollama (a tool for running LLMs locally), and pull the model through Ollama (default recommendation: gemma4:e4b). Start the game with uv run dungeon-crawler (this starts the Ollama service automatically); to change the model, add the --model parameter (e.g., uv run dungeon-crawler --model llama3). The zero-configuration design suits non-technical players while leaving room for customization.
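
Condensed into a shell session, the steps above look something like this. The two install one-liners are the standard installers from the uv and Ollama websites, assumed here rather than quoted from the project's README; the remaining commands are the ones given in this section.

```shell
# 1. Install uv (Python package manager) -- official installer from astral.sh
curl -LsSf https://astral.sh/uv/install.sh | sh

# 2. Install Ollama (local LLM runtime) -- official installer from ollama.com
curl -fsSL https://ollama.com/install.sh | sh

# 3. Pull the default recommended model
ollama pull gemma4:e4b

# 4. Run the game (starts the Ollama service automatically)
uv run dungeon-crawler

# Optional: swap in any Ollama-compatible model
uv run dungeon-crawler --model llama3
```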


Section 06

Application Scenarios and Expansion Possibilities

The project is not just a game; it can also be extended to:

  • Education: transformed into a foreign language learning tool (practice dialogue with AI NPCs)
  • Creative writing: generate story scenes as an inspiration source
  • Game prototyping: quickly validate narrative-driven mechanisms
  • AI research: explore the behavioral characteristics of LLMs in interactive environments.

Section 07

Summary and Future Outlook

LLM Dungeon Crawler is a small but polished open-source project that successfully marries classic game design with local AI technology. Its code is concise and easy to follow, making it an excellent example of local LLM integration. For players, it offers a different exploration experience every time; for developers, it demonstrates how to build offline AI-driven interactive content. As local LLM capabilities improve, we can expect more projects like this, bringing AI into all kinds of games and interactive applications.