Zing Forum


OpenTulpa: A Self-Hosted Persistent AI Agent Runtime to Make Workflows Smarter Over Time

OpenTulpa is a self-hosted persistent AI agent runtime designed for developers who need durable context, real execution capabilities, and reusable operational memory. It enables context persistence, skill accumulation, and approval gating through FastAPI, LangGraph, and SQLite, supporting multiple interfaces like Telegram, Slack, and internal APIs. This transforms AI agents from one-time conversational tools into intelligent assistants that can learn and evolve over time.

Tags: OpenTulpa, AI agent, self-hosted, persistence, LangGraph, FastAPI, Telegram bot, workflow automation, skill accumulation, approval gating
Published 2026-04-01 21:46 · Recent activity 2026-04-01 21:52 · Estimated read 7 min

Section 01

OpenTulpa: Self-Hosted Persistent AI Agent Runtime—Evolving Workflows Over Time

OpenTulpa is a self-hosted persistent AI agent runtime designed for developers needing durable context, real execution, and reusable operational memory. It addresses the limitation of most AI agents (discarding operational state after each run) by enabling agents to learn and evolve over time. Key features include persistent context storage, multi-tool execution, skill accumulation, approval gating for safe operations, and support for interfaces like Telegram, Slack, and internal APIs. Its tech stack combines FastAPI, LangGraph, and SQLite for flexibility, workflow orchestration, and local data privacy.


Section 02

Background: Limitations of Current AI Agents

Most AI agent demos stop at prompt boundaries—they answer requests, call a few tools, then discard all operational state that could make future runs more efficient. This means agents lack true memory of prior interactions, user preferences, or completed tasks, limiting their ability to support long-term, evolving workflows. OpenTulpa was built to break this limitation by providing a persistent runtime for durable context and reusable operational memory.


Section 03

Persistent Context: Beyond Prompt Boundaries

OpenTulpa solves the memory problem by storing reusable work components: preferences/directives, files/artifacts, prior decisions, context events, skills, routines, thread rollups, approval records, and link aliases. Example scenario: For a daily market monitoring request, it retrieves prior context/preferences, generates a brief (saved as an artifact), stores the task as a routine, and reuses these elements in future runs—making it an evolving assistant rather than a one-time tool.


Section 04

Real Execution: From Text to Action via Multi-Tool Support

Unlike text-only AI assistants, OpenTulpa executes actions via tools:

  • Web retrieval: Search, analyze HTML/PDF/DOCX/images.
  • File operations: Read/write local files, manage directories.
  • Browser automation: Control browsers via Playwright (fill forms, scrape dynamic content).
  • Slack integration: List channels, read history, post messages (with consent).
  • Internal API calls: Integrate with enterprise systems.
  • Custom scripts: Write/save/run scripts for complex workflows.
  • Scheduled routines: Execute one-time or periodic tasks automatically.
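The tool list above can be sketched as a simple registry that the planner dispatches into. The decorator, tool names, and function bodies are hypothetical stand-ins, not OpenTulpa's or LangGraph's actual API.

```python
from typing import Callable

# Illustrative tool registry: each capability is exposed to the runtime
# as a named callable the planner can invoke by name.
TOOLS: dict[str, Callable[..., str]] = {}

def tool(name: str):
    """Register a function under a stable name for the planner."""
    def decorator(fn: Callable[..., str]) -> Callable[..., str]:
        TOOLS[name] = fn
        return fn
    return decorator

@tool("read_file")
def read_file(path: str) -> str:
    # File operations: read local files on the host the runtime owns.
    with open(path, encoding="utf-8") as f:
        return f.read()

@tool("post_slack_message")
def post_slack_message(channel: str, text: str) -> str:
    # Slack integration: a real implementation would call the Slack API
    # only after consent; here we just echo the intended action.
    return f"[would post to {channel}] {text}"

# The runtime dispatches a planned step to the matching tool:
result = TOOLS["post_slack_message"]("#general", "Daily brief is ready")
print(result)
```

Keeping tools behind a uniform string-keyed interface is what lets skills and routines reference them durably across runs.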

Section 05

Skill Accumulation & Approval Gating: Safety & Reusability

Skill Accumulation: When a workflow repeats, OpenTulpa saves it as a reusable skill (analyze requirements → generate code → save → reuse). Skill types include API integration, data processing, notifications, monitoring, and automation. Approval Gating: To ensure safety, operations are classified:

  • Read/internal actions: executed directly.
  • External-impact actions: require persistent, one-time, or time-limited approval (records stored for audit).

This balances autonomy with control over side effects.
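The read-vs-external classification above can be sketched as a small gate object. The action names, the set of external actions, and the audit-record shape are assumptions for illustration only.

```python
from dataclasses import dataclass, field

# Hypothetical set of actions with external side effects; everything
# else is treated as a read/internal action.
EXTERNAL_ACTIONS = {"post_slack_message", "send_email", "write_remote_file"}

@dataclass
class ApprovalGate:
    granted: set = field(default_factory=set)   # persistent approvals
    audit_log: list = field(default_factory=list)

    def approve(self, action: str) -> None:
        """Record a persistent approval for an external action."""
        self.granted.add(action)

    def check(self, action: str) -> bool:
        """Read/internal actions pass; external ones need prior approval.
        Every decision is appended to the audit log."""
        allowed = action not in EXTERNAL_ACTIONS or action in self.granted
        self.audit_log.append({"action": action, "allowed": allowed})
        return allowed

gate = ApprovalGate()
assert gate.check("read_file")                # internal: runs directly
assert not gate.check("post_slack_message")   # external: blocked until approved
gate.approve("post_slack_message")
assert gate.check("post_slack_message")       # approved: runs, and is audited
print(len(gate.audit_log))  # prints 3
```

Logging the denied attempts as well as the allowed ones is what makes the audit trail useful for reviewing an agent's intended side effects.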

Section 06

Tech Architecture & Deployment Options

Architecture: Telegram / Internal API / Events → FastAPI (webhooks/routes) → Capture context + retrieve state → LangGraph runtime (planning / tool execution / validation) → Approval gating (external actions) → Persist artifacts/skills/routines → Local storage (.opentulpa/, SQLite/vector store).

Deployment:

  • Local: Python 3.12+, uv, and an OpenAI-compatible API key (clone the repo, set .env, run start.sh).
  • Docker: Build image, run with .env.
  • Railway: Uses included Dockerfile for automatic deployment.
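The event pipeline above (webhook → context capture → runtime → gating → persistence) can be sketched end to end as one function. All stage logic here is stubbed and hypothetical; it only mirrors the shape of the arrow chain, not OpenTulpa's implementation.

```python
# Minimal sketch of the request pipeline: each numbered step corresponds
# to one stage of the architecture diagram; names are illustrative.
def handle_event(event: dict, state: dict, approved: set) -> dict:
    # 1. Capture context and retrieve prior state for this thread.
    context = {**state.get(event["thread"], {}), "last_event": event["text"]}
    # 2. Planning / tool execution (stubbed as a single planned action).
    plan = {"action": "post_slack_message" if "notify" in event["text"] else "read_file"}
    # 3. Approval gating: only approved external actions may execute.
    external = plan["action"] == "post_slack_message"
    executed = (not external) or (plan["action"] in approved)
    # 4. Persist the thread rollup back into local storage.
    state[event["thread"]] = context
    return {"executed": executed, "plan": plan, "context": context}

state: dict = {}
out = handle_event({"thread": "t1", "text": "notify the team"}, state, approved=set())
print(out["executed"])  # prints False: external action without approval
```

Note that state is persisted even when execution is blocked, so a later approved run still benefits from the captured context.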

Section 07

Use Cases & Example Requests

Use Cases:

  • Regular market/competition monitoring.
  • Slack/inbox classification and draft generation.
  • Document review and decision extraction.
  • API integration scaffolding and scheduled automation.
  • Project status/execution summaries.
  • Self-hosted developer assistant with safe actions.

Example Requests:

  • Summarize Slack’s most important unread items and draft replies.
  • Daily market monitoring and brief delivery.
  • Extract decisions from a PDF and store for future use.
  • Build an API integration skill and schedule it.
  • Check project changes since yesterday and draft a status update.

Section 08

Conclusion: OpenTulpa’s Significance

OpenTulpa represents a shift from disposable AI agents to persistent, evolving systems. It offers developers full control (self-hosted, local data), growing capabilities (skill accumulation), and safety (approval gating). As an AI paradigm, it’s sustainable and controllable—ideal for those wanting a personal AI assistant that adapts and improves over time.