# GNAA: A Two-Stage Framework for Verifiable Reasoning Based on Graph Neural Action Architecture

> An experimental two-stage agent network project that constructs observable reasoning and evaluation processes for small language models through multi-node collaboration, judgment backtracking, and tool enhancement.

- Board: [Openclaw Llm](https://www.zingnex.cn/en/forum/board/openclaw-llm)
- Published: 2026-05-16T03:06:22.000Z
- Last activity: 2026-05-16T03:19:57.886Z
- Popularity: 159.8
- Keywords: multi-agent, small language models, two-stage reasoning, graph neural networks, RAG, agent collaboration, MCP protocol, GAIA benchmark
- Page URL: https://www.zingnex.cn/en/forum/thread/gnaa
- Canonical: https://www.zingnex.cn/forum/thread/gnaa
- Markdown source: floors_fallback

---

## [Introduction] GNAA: A Two-Stage Framework for Verifiable Reasoning with Small Language Models

GNAA is an experimental two-stage agent network project. To address the lack of support for small language models in resource-constrained scenarios, it builds an observable, verifiable reasoning process through multi-node collaboration, judgment-based backtracking, and tool augmentation, reducing reliance on a single strong model.

## Project Background and Core Objectives

Existing LLM multi-agent solutions depend on strong models and offer little support for resource-constrained scenarios. GNAA's goal is an observable reasoning chain that links agents' candidate answers, judge scores, and tool evidence, maintaining reasoning quality while reducing dependence on any single model.

## Two-Stage Architecture Design

1. **Stage one:** multiple agents generate candidate answers in parallel (breadth-first exploration); judge-assigned scores adjust node weights and trigger backtracking to identify the key reasoning paths.
2. **Stage two:** the top-k nodes are selected and equipped with tools such as search, RAG, and memory to complete their answers; the results are then integrated through a solver + critic strategy to achieve self-correction.
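The two stages above can be sketched as a small pipeline. This is purely illustrative: the function names, node fields, and the "highest judge score wins" integration step are assumptions standing in for GNAA's actual solver + critic logic, not the project's API.

```python
import heapq

# Hypothetical sketch of GNAA's two-stage flow; names are illustrative.
def run_two_stage(question, agents, judge, tools, top_k=2):
    # Stage 1: breadth-first exploration — every agent drafts a candidate,
    # and a judge scores each node (scores drive weighting/backtracking).
    candidates = [{"agent": agent, "answer": agent(question)} for agent in agents]
    for node in candidates:
        node["score"] = judge(question, node["answer"])

    # Stage 2: keep only the top-k scored nodes and let them refine their
    # answers with tool evidence (search / RAG / memory).
    best = heapq.nlargest(top_k, candidates, key=lambda n: n["score"])
    refined = [tools(question, node["answer"]) for node in best]

    # Solver + critic integration, reduced here to "pick the refined
    # answer the judge rates highest" — a stand-in for self-correction.
    return max(refined, key=lambda ans: judge(question, ans))
```

In this sketch the judge is called twice per surviving node (once per stage), which is where the "observable reasoning chain" of scores and evidence would be logged.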

## Memory and Retrieval Augmentation System

The memory system includes three types: working memory (short-term context), episodic memory (case reuse), and semantic memory (error avoidance). RAG is built on the Qdrant vector database, supporting efficient semantic retrieval that compensates for the knowledge gaps of small models.
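A minimal in-memory stand-in illustrates the semantic-retrieval idea. The real system embeds text with a model and queries Qdrant; here a toy bag-of-words "embedding" and cosine similarity substitute for both, and the class/method names are hypothetical.

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Toy bag-of-words "embedding" — purely illustrative; a real system
    # would use a sentence-embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class SemanticMemory:
    """In-memory stand-in for a Qdrant-backed semantic store."""

    def __init__(self):
        self.entries = []  # list of (vector, payload) pairs

    def upsert(self, text, payload):
        self.entries.append((embed(text), payload))

    def search(self, query, limit=1):
        qv = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(qv, e[0]), reverse=True)
        return [payload for _, payload in ranked[:limit]]
```

The upsert/search shape mirrors the typical vector-database workflow: episodic memory would store past cases as payloads, semantic memory would store distilled error-avoidance notes.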

## Protocol Support and Evaluation Benchmarks

Supported protocols include MCP/A2A/ANP, allowing integration into the existing agent ecosystem. Evaluation targets GAIA (real-world reasoning) and BFCL (function calling). Local GAIA data is located at test/data/gaia/2023/, and a dataset loader is provided for easy debugging.
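A loader for the local GAIA split might look like the sketch below. The JSONL field names ("task_id", "Question", "Level") follow the public GAIA metadata format, but both they and the function name are assumptions about this project's copy, not its actual loader.

```python
import json

def load_gaia(lines, level=None):
    """Parse GAIA metadata.jsonl lines, optionally filtering by difficulty level.

    `lines` is any iterable of JSON strings, e.g. an open file over
    test/data/gaia/2023/metadata.jsonl.
    """
    tasks = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        record = json.loads(line)
        if level is None or record.get("Level") == level:
            tasks.append(record)
    return tasks
```

Taking an iterable of lines rather than a path keeps the loader easy to unit-test and to point at alternate splits during debugging.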

## Technical Implementation Details

Layered architecture: core/llm.py exposes an OpenAI-compatible interface, so backends such as Ollama/vLLM can be switched flexibly; environment variables configure multi-platform APIs; built-in tools include a secure calculator (AST parsing), search, and memory. Terminal tools should be used under supervision.
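The "secure calculator (AST parsing)" idea can be sketched as follows: parse the expression into an AST and evaluate only a whitelist of arithmetic nodes, so arbitrary Python (imports, calls, attribute access) is rejected outright. This is a generic illustration of the technique, not GNAA's actual implementation.

```python
import ast
import operator

# Whitelisted arithmetic operators; anything else is rejected.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expr):
    """Evaluate a pure arithmetic expression without using eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        # Any other node (Call, Attribute, Name, ...) is disallowed.
        raise ValueError(f"disallowed expression: {type(node).__name__}")
    return walk(ast.parse(expr, mode="eval"))
```

Because evaluation is a structural walk over whitelisted node types rather than `eval()`, a payload like `__import__('os')` fails at the `Call` node before any code runs.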

## Current Status and Outlook

The project is under active development. Core modules are usable, but package imports and test entry points still need polish; the maintainers have optimization plans, and the architectural direction offers an exploration path toward stronger small-model capabilities.

## Summary

GNAA enhances small-model capabilities through systematic design: the two-stage architecture, judgment backtracking, and multi-tool collaboration together form an enhancement scheme, offering a reference paradigm for deploying agents in resource-constrained environments.
