Zing Forum


AGI-Genkai: A Collection of Experiments Exploring the Cognitive Boundaries of Large Language Models

A benchmark project designed based on a cognitive science framework, using experimental methods inspired by psychology and neuroscience to systematically explore the cognitive boundaries of current AI systems.

Tags: AGI Evaluation · Cognitive Science · Benchmarking · Large Language Models · Metacognition · Open Source Project · AI Safety · Psychology
Published 2026-05-15 06:42 · Recent activity 2026-05-15 06:56 · Estimated read: 6 min

Section 01

AGI-Genkai Project Introduction: Exploring LLM Cognitive Boundaries Using a Cognitive Science Framework

AGI-Genkai is an open-source benchmark project designed around a cognitive science framework, aiming to systematically map the cognitive boundaries of current large language models (LLMs). The project's core hypothesis is that general intelligence can be decomposed into 10 key cognitive dimensions. Using experimental methods inspired by psychology and neuroscience, it focuses on areas underserved by existing AI benchmarks, such as metacognition and attention, to provide empirical guidance for AI research.


Section 02

Project Background and Research Motivation

The definition and evaluation criteria of AGI remain widely debated in academia and industry. The AGI-Genkai project (genkai is Japanese for 'limit') does not rush to claim that AGI has been achieved; instead, it first maps the cognitive boundaries of current AI systems. Its core hypothesis, drawn from cognitive science research, is that general intelligence can be decomposed into 10 key cognitive ability dimensions. Through targeted experiments, it aims to chart the current AI capability landscape and guide future research directions.


Section 03

Cognitive Framework and Methodological Features

10 Key Cognitive Ability Dimensions: Perception, Generation, Attention, Learning, Memory, Reasoning, Metacognition, Executive Function, Problem-solving, Social Cognition. The project focuses on underserved dimensions such as learning, metacognition, and attention.

Methodological Features: Interdisciplinary integration (paradigm transfer from psychology/neuroscience), problem-oriented (focus on weak links), evidence-driven (reproducible quantitative results), open-source collaboration (GitHub community contributions).


Section 04

Core Experimental Modules and Metacognition Research

Core Experimental Modules:

  1. Learning from Images: evaluates LLMs' ability to learn conventions from images and transfer them to reasoning;
  2. Learned Helplessness: explores whether LLMs exhibit human-like giving-up behavior under sustained negative feedback;
  3. Cognitive Maps and Maze Reasoning: draws on classic studies and finds that LLM spatial reasoning relies on language pattern matching rather than genuine spatial cognition;
  4. ARC-AGI-3 Interactive Reasoning: evaluates generalization and abstract reasoning on tasks solvable by humans but extremely challenging for AI.
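A learned-helplessness trial like the one in module 2 could be harnessed roughly as sketched below. This is an illustrative assumption, not the project's actual protocol: `stub_model` stands in for a real LLM call, and detecting "giving up" by phrase matching is a crude placeholder heuristic.

```python
def run_trial(model_answer_fn, max_turns=6):
    """Run one learned-helplessness trial: return negative feedback no matter
    what the model answers, and record how many turns it persists before
    giving up (detected here by a crude phrase-matching heuristic)."""
    transcript = []  # list of (answer, feedback) pairs shown back to the model
    for turn in range(1, max_turns + 1):
        answer = model_answer_fn(transcript)
        transcript.append((answer, "Incorrect. Try again."))
        if not answer or "give up" in answer.lower():
            return {"gave_up": True, "turns": turn}
    return {"gave_up": False, "turns": max_turns}

# Placeholder for a real LLM call: persists for three turns, then quits.
def stub_model(transcript):
    return "I give up." if len(transcript) >= 3 else f"attempt {len(transcript) + 1}"

print(run_trial(stub_model))
```

Aggregating `turns` across many trials and models would give a simple persistence metric to compare against a solvable-task control condition.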

Metacognition Research: focuses on AI self-assessment abilities, examining responses to absurd questions and the impact of prompt constraints on performance, to provide a basis for building reliable AI.
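One simple way to quantify the self-assessment ability described above — a common calibration metric, not necessarily the project's own — is a Brier score over the model's stated confidences:

```python
def brier_score(records):
    """Mean squared gap between stated confidence (0.0-1.0) and actual
    correctness (0 or 1). Lower is better: a model that says 90% and is
    right 90% of the time scores well; overconfident errors score badly."""
    return sum((conf - correct) ** 2 for conf, correct in records) / len(records)

# Toy records: (confidence the model reported, whether it was actually right).
toy = [(0.9, 1), (0.8, 1), (0.7, 0), (0.95, 1), (0.6, 0)]
print(round(brier_score(toy), 4))
```

The single overconfident miss (0.7, 0) dominates the score, which is exactly the failure mode metacognition experiments try to surface.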


Section 05

Connection to DeepMind's AGI Evaluation Framework

AGI-Genkai echoes Google DeepMind's research: it cites DeepMind's 2026 report Measuring Progress Toward AGI: A Cognitive Framework, which proposes a similar decomposition into cognitive dimensions, and mentions the Kaggle 'Measuring AGI' hackathon hosted by DeepMind, keeping the project aligned with industry frontiers.


Section 06

Project Limitations and Future Directions

Limitations: limited experiment scale (some experiments are still at the proof-of-concept stage), standardization that lags mature benchmarks such as MMLU, and limited model coverage (mainly mainstream models).

Future Directions: Expand cognitive dimension coverage, establish a strict statistical evaluation framework, introduce more model variants, explore multimodal cognitive assessment.


Section 07

Project Significance and Summary

Project Significance: provides a multi-dimensional perspective on AI evaluation and a reminder that intelligence is multi-faceted; it supports AI safety (identifying defect risks), model development (guiding architecture improvements), and policy-making (setting accurate capability expectations).

Summary: AGI-Genkai is a down-to-earth open-source project that maps LLM capability boundaries using a cognitive science framework. Whether or not it leads to AGI, systematically understanding the AI capability map has scientific value in itself.