Zing Forum

Forge: A Distributed LLM Inference Protocol Using Computing Power as Currency

Forge is a distributed LLM inference protocol built with Rust, which uses computing power itself as a medium of exchange. It defines every 10^9 FLOPs as one Compute Unit (CU), requires no tokens or ICOs, and is directly compatible with the OpenAI API format.

Tags: Distributed Inference · Compute Market · LLM · Rust · Decentralization · OpenAI API · Compute Unit
Published 2026-04-12 19:43 · Recent activity 2026-04-12 19:48 · Estimated read: 8 min

Section 01

[Introduction] Forge: A Distributed LLM Inference Protocol Using Computing Power as Currency

Forge is a distributed LLM inference protocol built with Rust around a single core concept: "computing power as currency". It defines every 10^9 FLOPs as a standardized unit called a Compute Unit (CU), requires no tokens or ICOs, and is compatible with the OpenAI API format. It aims to solve the high costs, lack of trust, high entry barriers, and payment friction of centralized computing power services, and to build a decentralized computing power trading network.


Section 02

Project Background and Motivation: Pain Points of Centralized Computing Power Services

Current LLM inference relies on centralized cloud service providers, which have the following problems:

  • High cost: GPU resources are concentrated, and prices lack transparency
  • Trust issues: Users cannot verify whether the service provider delivers the claimed computing power
  • Entry barriers: Small computing power providers find it hard to access the mainstream market
  • Payment friction: Cross-border computing power transactions are inefficient

Forge's core concept is "computing power as currency". It uses technical means to standardize and verify computing power, making it a direct value unit for exchange.


Section 03

Core Mechanism: Design Features of Compute Unit (CU)

Forge defines a standardized computing power unit: Compute Unit (CU) = 10^9 FLOPs of verified inference computation. Key features include:

Verifiability

Each CU corresponds to cryptographically verified actual computation, ensuring that the computation was executed correctly, that the results are accurate, and that providers cannot report compute they did not perform.
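The shape of such a check can be sketched in Rust. This is illustration only: Forge's actual verification is cryptographic, whereas `DefaultHasher` below is not, and the `commit`/`verify` names are invented for the example.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// A provider commits to its inference output by hashing
// (request_id, output); the protocol can later compare the
// commitment against a re-executed or audited result.
fn commit(request_id: u64, output: &str) -> u64 {
    let mut h = DefaultHasher::new();
    request_id.hash(&mut h);
    output.hash(&mut h);
    h.finish()
}

fn verify(request_id: u64, output: &str, commitment: u64) -> bool {
    commit(request_id, output) == commitment
}

fn main() {
    let c = commit(42, "The answer is 4.");
    assert!(verify(42, "The answer is 4.", c));
    // A tampered or fabricated result no longer matches the commitment.
    assert!(!verify(42, "tampered output", c));
    println!("commitment checks passed");
}
```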

Standardization

It converts the computing power of different hardware (NVIDIA A100, AMD MI300, etc.) into a common FLOPs measure, enabling cross-platform value comparison.

Instant Settlement

Computing power providers can get CU rewards immediately after verification, without payment terms or exchange rate risks.
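Under the stated definition (1 CU = 10^9 verified FLOPs), CU accounting reduces to integer conversion. A minimal Rust sketch, with names invented for illustration rather than taken from Forge:

```rust
// 1 CU = 10^9 verified FLOPs, per the protocol's definition.
const FLOPS_PER_CU: u64 = 1_000_000_000;

/// Convert a verified FLOP count into whole Compute Units,
/// keeping the remainder as an unsettled balance.
fn flops_to_cu(verified_flops: u64) -> (u64, u64) {
    (verified_flops / FLOPS_PER_CU, verified_flops % FLOPS_PER_CU)
}

fn main() {
    // e.g. one inference pass measured at 3.5e9 FLOPs
    let (cu, remainder) = flops_to_cu(3_500_000_000);
    assert_eq!(cu, 3);
    assert_eq!(remainder, 500_000_000);
    println!("{cu} CU earned, {remainder} FLOPs carried over");
}
```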


Section 04

Technical Architecture: Rust Selection and OpenAI API Compatibility

Forge is built using the Rust language. Reasons for choosing Rust:

  • Performance: Close to C/C++ runtime efficiency, suitable for low-latency inference scenarios
  • Security: Memory safety reduces runtime crash risks
  • Concurrency: Ownership model is suitable for high-concurrency network services
  • Ecosystem: Mature asynchronous runtime (Tokio) and WebAssembly support

In addition, Forge exposes an interface fully compatible with the OpenAI API format: existing applications can migrate without code changes, developers do not need to learn a new specification, and existing toolchains can be reused as-is.
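What "OpenAI API compatible" means for a client can be shown with the request body of the chat format. The field names follow the OpenAI chat schema; the helper below is a hand-rolled sketch (a real client would use a JSON library with proper escaping), not Forge code:

```rust
// Build an OpenAI-style chat completion request body.
// Escaping is omitted for brevity; this is illustration only.
fn chat_request_body(model: &str, prompt: &str) -> String {
    format!(
        r#"{{"model":"{model}","messages":[{{"role":"user","content":"{prompt}"}}]}}"#
    )
}

fn main() {
    // The same body an OpenAI client would send could be POSTed
    // unchanged to a Forge node's compatible endpoint.
    let body = chat_request_body("llama-3-70b", "Hello");
    assert!(body.contains(r#""model":"llama-3-70b""#));
    assert!(body.contains(r#""role":"user""#));
    println!("{body}");
}
```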


Section 05

Economic Model: Tokenless Design and Market Dynamic Pricing

Tokenless Design

Forge clearly states "no tokens, no ICOs". Advantages:

  • Avoid speculation: CU is linked to actual computing power, no room for hype
  • Regulatory friendly: No involvement in securities compliance risks
  • Value anchoring: Each CU has clear physical backing (10^9 FLOPs)
  • Instant usability: No need to wait for token issuance or liquidity establishment

Market Dynamic Pricing

The price of CU is determined by supply and demand. It decreases when supply is sufficient and increases when demand is strong, naturally balancing resource allocation.
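A toy model of such pricing scales a base price by the demand/supply ratio. This is an assumed illustration of supply-and-demand balancing, not Forge's actual formula:

```rust
// Toy market pricing: price rises when demand outstrips supply
// and falls when supply is abundant. Names and the linear ratio
// are assumptions made for illustration.
fn cu_price(base_price: f64, demand_cu: f64, supply_cu: f64) -> f64 {
    base_price * (demand_cu / supply_cu)
}

fn main() {
    let base = 0.01; // illustrative base price per CU
    // Demand-heavy market: price rises above base.
    assert!(cu_price(base, 2_000.0, 1_000.0) > base);
    // Supply-heavy market: price falls below base.
    assert!(cu_price(base, 500.0, 1_000.0) < base);
    println!("pricing sketch ok");
}
```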


Section 06

Application Scenarios: Edge Computing Power, Enterprise Sharing, and Decentralized Services

Forge's application scenarios include:

  • Edge computing power utilization: Individuals can connect their idle GPUs to the network and get CU rewards by serving nearby inference requests
  • Enterprise computing power sharing: Rent idle computing power during business troughs and purchase computing power during peak periods to dynamically optimize resources
  • Decentralized AI services: Developers can build AI applications that do not rely on a single cloud service provider, improving availability and censorship resistance

Section 07

Challenges and Reflections: Key Issues in Practical Implementation

Challenges faced by Forge:

  • Verification overhead: Cryptographic verification requires computing resources, so a balance between cost and security is needed
  • Network effect: Sufficient supply and demand parties need to be attracted during the cold start phase
  • Hardware differences: GPUs of different architectures have different performance under the same FLOPs, so fair pricing needs to be optimized
  • Regulatory uncertainty: The global regulatory framework for distributed computing power transactions is not yet clear

Section 08

Conclusion: Forge's Innovative Exploration of the Computing Power Market

Forge redefines the rules of the computing power market. The concept of using computing power as currency avoids the speculative nature of cryptocurrencies while retaining the openness and censorship resistance of decentralized networks. For AI infrastructure and decentralized computing developers, this is a project worth paying attention to. As the demand for LLM inference grows, such innovative protocols may play an important role in the computing power market.