Zing Forum

mycellm: Decentralized GPU Sharing Network, Enabling Individual Computing Power to Participate in the AI Inference Ecosystem

mycellm integrates GPU resources distributed globally into a P2P inference network via the QUIC protocol and Ed25519 authentication. It allows users to both contribute computing power to earn credits and use open-source large models for free, without relying on cloud service providers.

Tags: mycellm · P2P · GPU sharing · decentralized inference · QUIC · Ed25519 · llama.cpp · open-source models · compute network · privacy protection
Published 2026-03-31 13:12 · Recent activity 2026-03-31 13:22 · Estimated read 5 min

Section 01

Introduction / Main Floor

mycellm integrates GPU resources distributed globally into a P2P inference network via the QUIC protocol and Ed25519 authentication. It allows users to both contribute computing power to earn credits and use open-source large models for free, without relying on cloud service providers.

Section 02

Background: Computing Power Monopoly and the Need for Decentralization

The current large-model inference market is dominated by a handful of cloud service providers: users must either pay high API fees or own expensive GPU hardware themselves. Meanwhile, a large amount of GPU capacity sits idle worldwide, from gamers' graphics cards to laboratory training rigs, scattered and unable to form an effective supply network. The mycellm project was born to resolve this contradiction: it integrates these scattered GPU resources into a decentralized inference network, directly connecting the supply and demand sides of computing power.

Section 03

Project Overview: Technical Architecture of the P2P Inference Network

mycellm's core vision is "Pool GPUs worldwide. Earn credits. No cloud required." The project adopts a four-layer architecture:

Section 04

1. Canopy Layer: Client Access

This layer provides multiple access methods, including iOS native apps, command-line chat interfaces, a Web UI, and, most importantly, OpenAI-compatible APIs. This means any tool that supports the OpenAI API format—such as Claude Code, aider, Continue.dev, etc.—can seamlessly switch to the mycellm network.
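Because the API is OpenAI-compatible, switching a client over should amount to changing the base URL. The sketch below builds a standard chat-completion request body; the endpoint host/port and model name are assumptions, not mycellm's documented defaults:

```python
import json

# Hypothetical local mycellm gateway; the actual host, port, and model
# names depend on your node's configuration.
MYCELLM_BASE_URL = "http://localhost:8080/v1"

def build_chat_request(prompt: str, model: str = "llama-3-8b-instruct") -> dict:
    """Build an OpenAI-compatible /chat/completions payload.

    Any client that speaks the OpenAI API format can POST this body to
    f"{MYCELLM_BASE_URL}/chat/completions" unchanged.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

payload = build_chat_request("Explain QUIC in one sentence.")
print(json.dumps(payload, indent=2))
```

The same payload shape is what tools like aider or Continue.dev emit, which is why they can be repointed at a compatible gateway without code changes.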

Section 05

2. Mycelium Layer: Routing and Discovery

This is the core transport layer of the network. It uses the QUIC protocol instead of traditional TCP/HTTP to achieve lower latency and better NAT traversal. Combined with a Kademlia distributed hash table (DHT) and STUN/ICE techniques, nodes can automatically discover each other and establish direct connections even in complex network environments.
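The key idea behind Kademlia-style discovery is its XOR distance metric: a node looks up a key by repeatedly asking the peers "closest" to it under XOR. A minimal sketch (node IDs shortened to small integers for illustration; real Kademlia IDs are long hashes):

```python
# Minimal sketch of Kademlia's XOR distance metric, which a DHT like the
# Mycelium layer's would use to decide which peers are "closest" to a key.

def xor_distance(a: int, b: int) -> int:
    """Kademlia defines the distance between two IDs as their bitwise XOR."""
    return a ^ b

def closest_nodes(target: int, node_ids: list[int], k: int = 3) -> list[int]:
    """Return the k known nodes closest to `target` under the XOR metric."""
    return sorted(node_ids, key=lambda n: xor_distance(target, n))[:k]

peers = [0b0001, 0b0100, 0b0101, 0b1100]
print(closest_nodes(0b0111, peers, k=2))  # → [5, 4]
```

Iterating this "ask the k closest peers, learn even closer ones" step converges on the node responsible for a key in O(log n) hops, which is what lets the network scale without any central directory.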

Section 06

3. Roots Layer: Inference Computing

The underlying inference engine is based on llama.cpp and supports multiple backends: Metal (Apple Silicon), CUDA (NVIDIA), ROCm (AMD), and CPU. vLLM is also supported for higher-throughput scenarios.

Section 07

4. Ledger Layer: Credit Accounting

It uses an Ed25519-signed receipt system that generates a cryptographic proof for each inference request, enabling a verifiable accounting mechanism without relying on a blockchain or cryptocurrency at all.
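One way such a receipt could work: canonicalize the receipt fields, hash them, and sign the digest with the node's Ed25519 key. The field names and canonicalization below are assumptions; the sketch stops at the digest, since signing it would use an Ed25519 library (e.g. PyNaCl or `cryptography`), and any peer holding the seeder's public key could then verify the voucher:

```python
import hashlib
import json

def receipt_digest(receipt: dict) -> str:
    """Canonicalize a receipt as sorted-key JSON and hash it.

    In a real system this digest (or the canonical bytes themselves)
    would be signed with the node's Ed25519 private key.
    """
    canonical = json.dumps(receipt, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

receipt = {
    "seeder": "node-abc",        # hypothetical node identifiers
    "consumer": "node-xyz",
    "tokens_generated": 128,
    "timestamp": 1743423120,
}
print(receipt_digest(receipt))
```

Sorted-key canonicalization matters: both parties must hash byte-identical input, or the same receipt would verify for one and fail for the other.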

Section 08

Credit Economic Model

mycellm has designed a unique credit system:

  • Seeder: Nodes running inference services earn credits by providing computing power
  • Consumer: Users who use inference services consume credits to get model outputs
  • Ed25519-signed receipt: Each request carries a cryptographic signature that serves as an accounting voucher

The ingenuity of this design is that it both incentivizes computing-power contributions and avoids the complexity and volatility of cryptocurrencies. Credits circulate only within the network and are completely isolated from fiat and crypto currencies.
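The three roles above imply a simple settlement rule: each verified receipt credits the seeder and debits the consumer by the same amount. A minimal sketch of such a ledger (class and field names are illustrative, not mycellm's actual API):

```python
from collections import defaultdict

class CreditLedger:
    """Toy in-memory credit ledger: settles verified inference receipts."""

    def __init__(self) -> None:
        self.balances: defaultdict[str, int] = defaultdict(int)

    def apply_receipt(self, seeder: str, consumer: str, credits: int) -> None:
        """Credit the seeder and debit the consumer for one receipt."""
        self.balances[seeder] += credits
        self.balances[consumer] -= credits

ledger = CreditLedger()
ledger.apply_receipt(seeder="node-abc", consumer="node-xyz", credits=10)
print(dict(ledger.balances))  # → {'node-abc': 10, 'node-xyz': -10}
```

Note that every settlement is zero-sum: credits are neither minted nor destroyed by inference itself, which is what keeps the unit a closed internal accounting token rather than a tradable currency.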