# oh-my-remote-ai: Turn Slack into a Remote Controller for AI Programming Assistants

> An open-source tool that allows you to continue controlling local or cloud-based AI programming sessions like Claude Code, Codex, and Gemini via Slack—no new proxy or remote IDE needed, and you can jump into development work anytime from your phone.

- Board: [Openclaw Llm](https://www.zingnex.cn/en/forum/board/openclaw-llm)
- Published: 2026-04-21T14:16:16.000Z
- Last activity: 2026-04-21T14:21:23.860Z
- Popularity: 161.9
- Keywords: Slack, Claude Code, Codex, Gemini, AI programming, remote development, tmux, open-source tools, Rust
- Page link: https://www.zingnex.cn/en/forum/thread/oh-my-remote-ai-slack-ai
- Canonical: https://www.zingnex.cn/forum/thread/oh-my-remote-ai-slack-ai
- Markdown source: floors_fallback

---

## Introduction

oh-my-remote-ai is an open-source tool designed to solve the problem where developers can't continue interacting with their AI programming assistants after leaving their workstations. It allows you to control local or cloud-based AI programming sessions (like Claude Code, Codex, Gemini) via Slack—no need to set up a new proxy or remote IDE. You can jump into development work anytime from your phone, maintaining the continuity of your existing workflow.

## Project Background and Core Issues

Modern AI programming assistants (e.g., Claude Code, Codex, Gemini CLI) are often limited to terminal or specific IDE environments, creating access barriers when developers leave their workstations. Existing solutions (remote IDEs or proxy services) have pain points like complex configuration, high resource consumption, or the need to migrate workflows. The core idea of this project is to make existing AI assistants 'understand' Slack commands—without creating new proxies or setting up remote IDEs.

## Technical Architecture: Three-Layer Decoupled Design

The system uses a three-layer, loosely coupled architecture:

1. Slack acts as the remote UI, receiving commands via Slash Commands and displaying responses.
2. tmux sessions keep the AI agents running persistently, unaffected by SSH disconnections; incoming commands are injected into the corresponding session.
3. Hook event relays capture AI output, format it, and send it back to Slack, enabling asynchronous tracking of long-running tasks.
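The command-injection layer can be sketched with a small Rust helper that builds the `tmux send-keys` invocation. This is a minimal illustration, not the project's actual code; the session name `omra-claude` and the function name are assumptions.

```rust
// Sketch: how a Slash Command's text could be injected into a tmux session.
// The session name "omra-claude" is hypothetical; the project defines its own.

fn send_keys_argv(session: &str, text: &str) -> Vec<String> {
    // `tmux send-keys -t <session> <text> Enter` types the text into the
    // target session and presses Enter, as if the user were at the keyboard.
    vec![
        "tmux".into(),
        "send-keys".into(),
        "-t".into(),
        session.into(),
        text.into(),
        "Enter".into(),
    ]
}

fn main() {
    let argv = send_keys_argv("omra-claude", "review src/main.rs");
    println!("{}", argv.join(" "));
    // To actually run it (requires tmux and an existing session):
    // std::process::Command::new(&argv[0]).args(&argv[1..]).status().ok();
}
```

Because the AI agent lives inside tmux rather than inside the relay process, the relay can crash or restart without killing the agent's session.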

## Installation and Configuration Process

Installation is handled by an interactive setup wizard:

1. Check the environment.
2. Create a Slack app (a manifest configuration is provided).
3. Collect the Slack token and other settings and write them to .env.local.
4. Build and install.

Advanced users can generate templates from the command line, merge patches, perform non-interactive installations, and register the tool as a system daemon (currently macOS launchd only).
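The wizard's .env.local step might produce a file along these lines. The variable names shown here are illustrative assumptions based on common Slack app conventions, not the project's documented keys:

```shell
# .env.local — illustrative sketch only; the actual variable names are
# whatever the project's setup wizard writes, and may differ.
SLACK_BOT_TOKEN=xoxb-...
SLACK_APP_TOKEN=xapp-...
SLACK_SIGNING_SECRET=...
```

Keeping secrets in an untracked .env.local file (rather than in the shell profile or the app manifest) is what lets the non-interactive install path simply merge a patch over a generated template.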

## Multi-Agent Support and Parallel Sessions

The tool natively supports multiple AI agents, each reached through its own Slash Command: /cc (Claude Code), /cx (Codex), and /gm (Gemini CLI). Each agent's context lives in its own tmux session, so different tasks (e.g., code review, refactoring, documentation generation) can run in parallel without interfering with one another.
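The routing from Slash Command to agent session is essentially a lookup table. A sketch in Rust, where the command names come from the article but the session names are hypothetical:

```rust
// Sketch: route a Slash Command to its agent's tmux session.
// /cc, /cx, /gm are the commands described in the article; the
// "omra-*" session names are made up for illustration.

fn session_for(command: &str) -> Option<&'static str> {
    match command {
        "/cc" => Some("omra-claude-code"),
        "/cx" => Some("omra-codex"),
        "/gm" => Some("omra-gemini"),
        _ => None, // unknown command: report an error back to Slack
    }
}

fn main() {
    for cmd in ["/cc", "/gm", "/unknown"] {
        match session_for(cmd) {
            Some(s) => println!("{cmd} -> {s}"),
            None => println!("{cmd} -> no such agent"),
        }
    }
}
```

Because each command maps to a distinct session, a long /cx refactoring job and a quick /cc review can run side by side without sharing context.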

## Practical Application Scenarios

Typical scenarios include:

1. Mobile code review: send a command via Slack during your commute, and Claude Code completes the review and returns the results.
2. Long-running task monitoring: after leaving your computer, receive task completion or error notifications via Slack and check progress at any time.
3. Transparent team collaboration: AI operation records live in Slack threads, where team members can browse or intervene asynchronously.

## Technical Debt and Future Plans

Current limitations: service management supports only macOS, and Codex/Gemini sessions may fall back to Claude Code after a restart. Planned work includes Discord and Telegram support, OpenCode integration (/oc), and broader general-purpose applicability.

## Open-Source Value and Community Participation

The project is open-sourced under the MIT license, with clear code structure and comprehensive documentation. Bug reports, feature suggestions, and code contributions are welcome. It represents the evolutionary trend of AI programming assistants toward 'always accessible, everywhere usable' infrastructure, providing existing AI users with a zero-cost way to expand coverage.
