Zing Forum

PicoCluster Claw: A Localized AI Agent Device with Monthly Electricity Cost Under $2

Tags: PicoCluster, Raspberry Pi, Jetson Orin Nano, OpenClaw, Ollama, Localized AI, Edge Computing, MCP, Private Deployment, Low Power
Published 2026-04-12 13:03 · Recent activity 2026-04-12 13:28 · Estimated read 6 min

Section 01

Introduction (Main Floor)

This article introduces the PicoCluster Claw project, a localized AI agent device built on Raspberry Pi 5 and NVIDIA Jetson Orin Nano. It runs OpenClaw and Ollama to enable fully private LLM inference, with an overall power consumption of only about 15 watts.

Section 02

Project Overview

In today's era of increasingly centralized cloud computing and AI services, data privacy and long-term subscription costs have become key concerns for many users. The PicoCluster Claw project offers a refreshing alternative: a fully localized, low-power, self-hosted AI agent device with a monthly electricity cost of less than $2.

This device consists of two core components:

  • clusterclaw (Raspberry Pi 5, 8 GB): responsible for the web interface, agent orchestration, and system management
  • clustercrush (Jetson Orin Nano Super, 8 GB): responsible for local large language model inference

Together, they form a private, always-on, low-energy personal AI infrastructure.
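The two-node split above can be captured in a small configuration sketch. The hardware and roles come from the list above; everything else (the dictionary layout, the helper) is illustrative only:

```python
# Sketch of the two-node layout described above; the structure is an
# illustration, not a configuration format the project defines.
NODES = {
    "clusterclaw": {
        "hardware": "Raspberry Pi 5 (8 GB)",
        "role": "web interface, agent orchestration, system management",
    },
    "clustercrush": {
        "hardware": "Jetson Orin Nano Super (8 GB)",
        "role": "local large language model inference",
    },
}

def describe(nodes: dict) -> list[str]:
    """Return one human-readable line per node."""
    return [f"{name}: {info['hardware']} -> {info['role']}"
            for name, info in nodes.items()]
```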

Section 03

clusterclaw (RPi5 Node)

The Raspberry Pi 5 acts as the system's "brain" and runs the following services:

  • Portal (port 80): unified entry page for PicoCluster Claw
  • ThreadWeaver (port 5173): chat interaction interface
  • OpenClaw Gateway (port 18789): agent gateway service
  • Caddy HTTPS Proxy (port 18790): secure web access
  • Blinkt! LED status indicator: visual system status
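Since each service binds a fixed port, a quick health check is just a round of TCP probes. A minimal sketch using only the standard library (the port map comes from the list above; the hostname you probe depends on your network):

```python
import socket

# Ports taken from the service list above.
SERVICES = {
    "Portal": 80,
    "ThreadWeaver": 5173,
    "OpenClaw Gateway": 18789,
    "Caddy HTTPS Proxy": 18790,
}

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def health_report(host: str) -> dict[str, bool]:
    """Probe every service port and report which are reachable."""
    return {name: port_open(host, port) for name, port in SERVICES.items()}
```

For example, `health_report("clusterclaw.local")` returns a per-service reachability map.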

Section 04

clustercrush (Orin Nano Node)

The Jetson Orin Nano Super serves as the "computing engine" and focuses on AI inference:

  • Ollama (port 11434): local LLM inference engine
  • 9 pre-installed models: covering general dialogue, reasoning, code, vision, and other capabilities
  • CUDA / cuDNN / TensorRT: the complete NVIDIA AI acceleration stack
  • OpenAI-compatible API: eases migration of existing applications
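Because Ollama exposes an OpenAI-compatible endpoint, existing OpenAI-style clients only need a new base URL. A minimal sketch using only the standard library (the `clustercrush` hostname is an assumption; substitute your node's address):

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's OpenAI-compatible chat endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

def ask(prompt: str, model: str = "llama3.2:3b",
        base_url: str = "http://clustercrush:11434") -> str:
    """Send the prompt to the local inference node and return the reply text."""
    req = build_chat_request(base_url, model, prompt)
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```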

Section 05

Power Consumption and Cost

The power consumption performance of the entire system is impressive:

Scenario        Power   Monthly Cost   Annual Cost
Idle            14 W    $1.61          $19.62
Typical Load    20 W    $2.30          $28.03
90% Idle Mix    15 W    $1.73          $21.02

At the electricity rate implied by the table (about $0.16/kWh), the annual operating cost is only about $21, which is less than many cloud AI services charge per month.
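The table's figures are reproducible with simple arithmetic, using a rate of $0.16/kWh (inferred from the table, not stated by the project), a 30-day month, and a 365-day year:

```python
def energy_cost(watts: float, hours: float, rate_per_kwh: float = 0.16) -> float:
    """Cost in dollars of drawing `watts` continuously for `hours`."""
    return watts / 1000 * hours * rate_per_kwh

def monthly_cost(watts: float) -> float:
    return round(energy_cost(watts, 24 * 30), 2)   # 30-day month

def annual_cost(watts: float) -> float:
    return round(energy_cost(watts, 24 * 365), 2)

# Reproduces the table: 14 W idle -> $1.61/month, $19.62/year
```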

Section 06

Pre-installed Model Lineup

The clustercrush node comes pre-installed with 9 large language models, occupying approximately 27GB of NVMe storage:

Model            Size    Type
llama3.2:3b      2.0 GB  General Dialogue (Main)
llama3.1:8b      4.9 GB  General Dialogue (High Quality)
gemma3:4b        3.3 GB  General (Multilingual)
phi3.5:3.8b      2.2 GB  Reasoning Specialized
deepseek-r1:7b   4.7 GB  Reasoning (Chain of Thought)
qwen2.5:3b       1.9 GB  Code/Structured Output
starcoder2:3b    1.7 GB  Code Generation
llava:7b         4.7 GB  Visual Understanding
moondream:1.8b   1.7 GB  Vision (Lightweight)

Ollama automatically manages model loading and unloading, dynamically allocating GPU memory based on usage needs.
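Ollama's `/api/tags` endpoint lists the currently installed models, which makes it easy to verify the lineup above. A sketch (again assuming the node answers as `clustercrush`); the parsing step is separated so it works on any `/api/tags` response:

```python
import json
import urllib.request

def summarize_models(tags_response: dict) -> list[tuple[str, float]]:
    """Turn an Ollama /api/tags response into (name, size-in-GB) pairs."""
    return [
        (m["name"], round(m["size"] / 1e9, 1))
        for m in tags_response.get("models", [])
    ]

def installed_models(base_url: str = "http://clustercrush:11434"):
    """Fetch and summarize the models installed on the inference node."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        return summarize_models(json.load(resp))
```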

Section 07

MCP Agent Toolset

PicoCluster Claw has 5 built-in MCP (Model Context Protocol) servers, providing 28 practical tools for local LLMs.
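In MCP, each server advertises its tools as JSON descriptors (a name, a description, and a JSON-Schema `inputSchema`) that the model can choose to call. A hypothetical descriptor for one of the LED tools, plus a tiny argument check, to illustrate the shape (the tool name `set_led_color` is invented for this example, not taken from the project):

```python
# Hypothetical MCP-style tool descriptor; field names follow the MCP
# convention of name / description / inputSchema (JSON Schema).
SET_LED_COLOR = {
    "name": "set_led_color",
    "description": "Set all Blinkt! LEDs to an RGB color.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "r": {"type": "integer", "minimum": 0, "maximum": 255},
            "g": {"type": "integer", "minimum": 0, "maximum": 255},
            "b": {"type": "integer", "minimum": 0, "maximum": 255},
        },
        "required": ["r", "g", "b"],
    },
}

def validate_args(schema: dict, args: dict) -> bool:
    """Check required keys only; a real server would do full JSON-Schema validation."""
    return all(key in args for key in schema.get("required", []))
```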

Section 08

LED Control (5 Tools)

Through the Pimoroni Blinkt! LED strip, these tools can:

  • Set LED color
  • Display progress bar
  • Pulse animation effect
  • Clear display
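The Blinkt! strip has eight RGB LEDs, so the progress-bar tool reduces to mapping a fraction onto pixel states. A sketch of that mapping, kept hardware-free for clarity (a real implementation would push the states out with the `blinkt` library's `set_pixel`/`show` calls):

```python
NUM_PIXELS = 8  # the Blinkt! strip has 8 LEDs

def progress_pixels(fraction: float) -> list[bool]:
    """Map a 0.0-1.0 progress fraction to on/off states for the 8 LEDs."""
    fraction = min(max(fraction, 0.0), 1.0)   # clamp out-of-range input
    lit = round(fraction * NUM_PIXELS)        # number of LEDs to light
    return [i < lit for i in range(NUM_PIXELS)]
```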