Zing Forum

oh-my-remote-ai: Turn Slack into a Remote Controller for AI Programming Assistants

An open-source tool that allows you to continue controlling local or cloud-based AI programming sessions like Claude Code, Codex, and Gemini via Slack—no new proxy or remote IDE needed, and you can jump into development work anytime from your phone.

Slack · Claude Code · Codex · Gemini · AI Programming · Remote Development · tmux · Open-Source Tool · Rust
Published 2026-04-21 22:16 · Recent activity 2026-04-21 22:21 · Estimated read: 6 min

Section 01

Introduction

oh-my-remote-ai is an open-source tool designed to solve the problem where developers can't continue interacting with their AI programming assistants after leaving their workstations. It allows you to control local or cloud-based AI programming sessions (like Claude Code, Codex, Gemini) via Slack—no need to set up a new proxy or remote IDE. You can jump into development work anytime from your phone, maintaining the continuity of your existing workflow.


Section 02

Project Background and Core Issues

Modern AI programming assistants (e.g., Claude Code, Codex, Gemini CLI) are typically confined to a terminal or a specific IDE, so access breaks the moment developers step away from their workstations. Existing remedies (remote IDEs or proxy services) bring their own pain points: complex configuration, high resource consumption, or forced workflow migration. The project's core idea is to make existing AI assistants "understand" Slack commands, without building a new proxy or setting up a remote IDE.


Section 03

Technical Architecture: Three-Layer Decoupled Design

The system uses a three-layer, loosely coupled architecture:

1. Slack acts as the remote UI, receiving commands via Slash Commands and displaying responses.
2. tmux sessions keep AI agents running persistently, unaffected by SSH disconnections; incoming commands are injected into the corresponding session.
3. Hook-based event relays capture AI output, format it, and send it back to Slack, supporting asynchronous tracking of long-running tasks.
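The tmux layer can be pictured with a small sketch: the relay process injects a prompt into a running agent session via `tmux send-keys`. This is an illustration of the technique, not the project's actual code; the session name `remote-ai-cc` and the argument layout are assumptions.

```rust
use std::process::Command;

/// Build the tmux argv used to type a prompt into a running agent
/// session. The session-naming scheme is a hypothetical example.
fn tmux_send_args(session: &str, prompt: &str) -> Vec<String> {
    vec![
        "send-keys".to_string(),
        "-t".to_string(),
        session.to_string(),
        prompt.to_string(),
        "Enter".to_string(), // simulate pressing Enter in the agent's TTY
    ]
}

/// Inject a prompt into the session. Because tmux owns the pty, the
/// agent keeps running even if the SSH connection that started it dies.
#[allow(dead_code)]
fn inject(session: &str, prompt: &str) -> std::io::Result<()> {
    Command::new("tmux")
        .args(tmux_send_args(session, prompt))
        .status()?;
    Ok(())
}

fn main() {
    let args = tmux_send_args("remote-ai-cc", "review the diff in src/");
    println!("tmux {}", args.join(" "));
}
```

The key design point is that the Slack-facing process never talks to the agent directly; it only writes keystrokes into the session and reads results back through hooks, which is what keeps the three layers decoupled.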


Section 04

Installation and Configuration Process

Installation runs through an interactive setup wizard: check the environment → create a Slack app (a manifest configuration is provided) → collect the Slack token and other settings and write them to .env.local → build and install. Advanced users can generate configuration templates from the command line, merge patches, run non-interactive installs, and register the service as a system daemon (currently macOS launchd only).
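The wizard's output can be pictured as a small env file like the one below. Every key name here is an assumption for illustration; the actual variables are defined by the setup wizard.

```
# .env.local — illustrative sketch only; real key names may differ
SLACK_BOT_TOKEN=xoxb-...       # bot token from the Slack app
SLACK_APP_TOKEN=xapp-...       # app-level token for Socket Mode
SLACK_CHANNEL_ID=C0123456789   # channel where responses are posted
TMUX_SESSION_PREFIX=remote-ai  # prefix for per-agent tmux sessions
```

Keeping secrets in a local, gitignored env file is what lets the non-interactive install path reuse the same configuration.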


Section 05

Multi-Agent Support and Parallel Sessions

The tool natively supports multiple AI agents, each reached through its own Slash Command: /cc (Claude Code), /cx (Codex), and /gm (Gemini CLI). Each agent's context lives in its own tmux session, so different tasks (e.g., code review, refactoring, documentation generation) can run in parallel without interfering with one another.
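The command-to-session routing amounts to a lookup table. A minimal sketch, assuming a session-name scheme invented for this example (the real tool's names may differ):

```rust
/// Map a Slack slash command to the tmux session hosting that agent.
/// Session names here are hypothetical placeholders.
fn session_for(command: &str) -> Option<&'static str> {
    match command {
        "/cc" => Some("remote-ai-claude-code"),
        "/cx" => Some("remote-ai-codex"),
        "/gm" => Some("remote-ai-gemini"),
        _ => None, // unknown command: report an error back to Slack
    }
}

fn main() {
    // Each command routes to its own session, so a /cc review and a
    // /cx refactor can run in parallel without sharing any context.
    for cmd in ["/cc", "/cx", "/gm", "/oc"] {
        println!("{cmd} -> {:?}", session_for(cmd));
    }
}
```

Because the mapping is one session per command, adding a new agent is just a new match arm plus a new tmux session, which is why later integrations like OpenCode are cheap to bolt on.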


Section 06

Practical Application Scenarios

Typical scenarios include:

1. Mobile code review: send a command via Slack during your commute; Claude Code completes the review and returns the results.
2. Long-running task monitoring: after shutting down your computer, receive completion/error notifications via Slack and check progress anytime.
3. Transparent team collaboration: AI operation records are stored in Slack threads, so team members can browse or intervene asynchronously.


Section 07

Technical Debt and Future Plans

Current limitations: service management supports only macOS; Codex/Gemini sessions may fall back to Claude Code after a restart. Planned work includes Discord and Telegram support, OpenCode integration (/oc), and broader tool generality.


Section 08

Open-Source Value and Community Participation

The project is open-sourced under the MIT license, with a clear code structure and comprehensive documentation; bug reports, feature suggestions, and code contributions are welcome. It reflects the trend of AI programming assistants evolving into "always accessible, usable everywhere" infrastructure, giving existing AI users a near-zero-cost way to extend where they can work.