Orchanet's intelligence layer is built on the 0G Compute SDK. Rather than routing LLM calls to centralized API endpoints, every inference task in Orchanet's pipeline is dispatched through the SDK to decentralized provider nodes.
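As a rough illustration, the call path looks something like the sketch below. The `ZGBroker` interface and `runInference` helper are hypothetical stand-ins for the real 0G Compute SDK surface, whose exact method names may differ, and the OpenAI-style chat route is likewise an assumption.

```ts
// Hypothetical abstraction over the 0G Compute SDK broker; the real SDK's
// method names and signatures may differ.
interface ZGBroker {
  // Resolve per-request auth headers for a given provider node.
  getRequestHeaders(providerAddress: string, prompt: string): Promise<Record<string, string>>;
  // Resolve the provider node's inference endpoint URL.
  getServiceUrl(providerAddress: string): Promise<string>;
}

// Route one inference call to a decentralized provider node instead of a
// centralized API endpoint.
async function runInference(
  broker: ZGBroker,
  providerAddress: string,
  prompt: string,
): Promise<string> {
  const url = await broker.getServiceUrl(providerAddress);
  const headers = await broker.getRequestHeaders(providerAddress, prompt);

  // Assumes the provider exposes an OpenAI-compatible chat completion route.
  const res = await fetch(`${url}/v1/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json", ...headers },
    body: JSON.stringify({ messages: [{ role: "user", content: prompt }] }),
  });
  const body = await res.json();
  return body.choices[0].message.content;
}
```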
The 0G Compute Network adopts a performance-based quality model: provider nodes compete to serve inference requests, and the protocol's economic incentives reward providers that deliver high-quality, low-latency responses, so those nodes are prioritized for future work.
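To make the idea concrete, here is a hedged client-side sketch of preferring the best-performing provider based on observed stats. The `ProviderStats` shape, the scoring formula, and its weights are illustrative assumptions, not part of the 0G protocol specification.

```ts
// Illustrative stats a client might track per provider node; these fields
// and the scoring weights below are assumptions, not protocol-defined.
interface ProviderStats {
  address: string;
  avgLatencyMs: number; // rolling average response latency
  successRate: number;  // fraction of verified, well-formed responses (0..1)
}

function selectProvider(candidates: ProviderStats[]): ProviderStats {
  if (candidates.length === 0) throw new Error("no providers available");
  // Lower score is better: latency is penalized, unreliability more so.
  const score = (p: ProviderStats) =>
    p.avgLatencyMs * (1 + 5 * (1 - p.successRate));
  return candidates.reduce((best, p) => (score(p) < score(best) ? p : best));
}
```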
Parallel Inference Execution: Orchestrators use Promise.all to dispatch agent tasks concurrently across multiple 0G Compute provider nodes. A three-agent committee (tokenomics modeler, smart contract auditor, and critic) can therefore run its inferences in parallel, cutting total pipeline latency to roughly that of the slowest single call, while each agent's output is computed on an independent provider node, which removes correlated-failure risk. A sketch of this fan-out follows.
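This minimal sketch reuses the hypothetical `runInference` helper and `ZGBroker` interface from above; the `AgentTask` shape and `runCommittee` function are likewise illustrative assumptions.

```ts
interface AgentTask {
  role: "tokenomics-modeler" | "contract-auditor" | "critic";
  prompt: string;
}

// Fan out the committee's inferences concurrently, one independent
// provider node per agent, so total latency approaches that of the
// slowest single call rather than the sum of all calls.
async function runCommittee(
  broker: ZGBroker,
  tasks: AgentTask[],
  providers: string[], // one independent provider address per agent
): Promise<Map<string, string>> {
  if (providers.length < tasks.length) {
    throw new Error("need one independent provider per agent");
  }
  const outputs = await Promise.all(
    tasks.map((task, i) => runInference(broker, providers[i], task.prompt)),
  );
  const results = new Map<string, string>();
  tasks.forEach((task, i) => results.set(task.role, outputs[i]));
  return results;
}
```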
Fine-Grained Token Tracking: Token consumption is metered per agent and per inference call, and the resulting usage data feeds directly into the 0G Ledger settlement process so that each provider is compensated accurately for the compute it delivered.
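A sketch of what that per-agent metering might look like; the `UsageRecord` shape and `TokenMeter` class are hypothetical, and the actual settlement flow is handled by the 0G SDK and on-chain contracts rather than application code.

```ts
// Illustrative per-call usage record; field names are assumptions.
interface UsageRecord {
  agentRole: string;
  providerAddress: string;
  promptTokens: number;
  completionTokens: number;
}

class TokenMeter {
  private records: UsageRecord[] = [];

  // Record usage for one inference call by one agent.
  record(usage: UsageRecord): void {
    this.records.push(usage);
  }

  // Aggregate total billable tokens per provider, the figure that would
  // be handed off to 0G Ledger settlement.
  totalsByProvider(): Map<string, number> {
    const totals = new Map<string, number>();
    for (const r of this.records) {
      const prev = totals.get(r.providerAddress) ?? 0;
      totals.set(r.providerAddress, prev + r.promptTokens + r.completionTokens);
    }
    return totals;
  }
}
```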