Zing Forum


RepoReasoner: An Automated Benchmark Framework for Evaluating Large Language Models' Reasoning Capabilities at the Code Repository Level

An automated benchmark system for evaluating large language models' reasoning capabilities on real code repositories. It supports two tasks—output prediction and call chain prediction—and addresses the granularity gap left by existing function-level benchmarks.

Tags: Code Reasoning · Benchmarking · Large Language Models · Software Engineering · Code Understanding · Automated Evaluation
Published 2026-04-08 17:12 · Recent activity 2026-04-08 17:18 · Estimated read 6 min

Section 01

RepoReasoner Framework Guide: An Automated Evaluation Benchmark for Repository-Level Code Reasoning Capabilities

RepoReasoner is an automated benchmark framework for evaluating large language models' reasoning capabilities at the granularity of real code repositories, filling the gap left by existing function-level code evaluation benchmarks. The framework supports two core tasks—output prediction and call chain prediction—evaluating models' code understanding in scenarios close to real development, from both micro and macro dimensions.


Section 02

Background: Granularity Limitations of Existing Code Evaluation Benchmarks

Current benchmarks for evaluating large language models' code capabilities mainly focus on the function level, ignoring the complex repository-level dependencies across multiple files and modules in real development. To fill this gap, the DeepSoftwareAnalytics team developed RepoReasoner, which automatically generates test instances from real open-source Python repositories.


Section 03

Core Task Design: Dual Evaluation from Micro and Macro Perspectives

RepoReasoner designs two repository-level reasoning tasks:

  1. Output Prediction Task: Given masked code snippets and context files, predict the correct output of the masked assertion statement, testing the ability to reason about variable states and execution paths;
  2. Call Chain Prediction Task: Given a test file, predict the list of other source files called during test execution, focusing on macro-level code dependency understanding.
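
The two task formats can be sketched as follows. The instance fields and the metrics used here (exact match for output prediction, set-level F1 for call chains) are illustrative assumptions for this sketch, not RepoReasoner's documented schema.

```python
# Illustrative instance shapes and checks; field names and metrics are
# assumptions for this sketch, not RepoReasoner's actual schema.

def check_output_prediction(instance, predicted):
    """Exact-match check: did the model recover the masked assertion value?"""
    return predicted.strip() == instance["expected_output"].strip()

def call_chain_f1(instance, predicted_files):
    """Set-level F1 between predicted and actually-executed source files."""
    pred, gold = set(predicted_files), set(instance["called_files"])
    overlap = len(pred & gold)
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

# Micro task: predict the value hidden behind "??" in the assertion.
output_instance = {
    "context": "def add(a, b):\n    return a + b",
    "masked_assertion": "assert add(2, 3) == ??",
    "expected_output": "5",
}

# Macro task: predict which source files a test file exercises.
chain_instance = {
    "test_file": "tests/test_parser.py",
    "called_files": ["pkg/parser.py", "pkg/lexer.py"],
}
```

The exact-match check suits the micro task because the masked value has a single ground truth, while the call chain task compares file sets, so a precision/recall trade-off metric is the natural fit.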

Section 04

Automated Pipeline: From Repository Selection to Benchmark Generation

RepoReasoner's automated benchmark construction pipeline includes four stages:

  1. Repository Selection and Filtering: Select open-source Python repositories with sufficient complexity and test coverage;
  2. Execution-Based Filtering: Validate repositories in a containerized environment and collect dynamic runtime information to ensure the reliability of reference answers;
  3. Semantic Data Rewriting: Generate semantically equivalent but syntactically different code variants to enhance dataset robustness;
  4. Instance Collection and Organization: Extract and filter potential instances from test files to form the final evaluation dataset.
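
The "collect dynamic runtime information" step of execution-based filtering can be illustrated in miniature with Python's tracing hook. RepoReasoner gathers this kind of trace by executing real test suites inside containers; the sketch below merely traces a single in-process call.

```python
# Miniature sketch of collecting dynamic runtime information: record which
# files are entered while a function runs. RepoReasoner does this against
# real repositories in containers; this is an in-process toy illustration.
import sys

def trace_called_files(func, *args, **kwargs):
    called = set()

    def tracer(frame, event, arg):
        if event == "call":
            # Each "call" event tells us which file the callee lives in.
            called.add(frame.f_code.co_filename)
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args, **kwargs)
    finally:
        sys.settrace(None)  # always detach the tracer
    return result, sorted(called)

def square(x):
    return x * x

result, files = trace_called_files(square, 4)
```

Collecting this ground truth dynamically, rather than by static analysis, is what makes the reference answers for the call chain task reliable.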

Section 05

Flexible Model Integration Support

RepoReasoner supports multiple model evaluation methods:

  • Compatible with the OpenAI API interface, allowing access to commercial models such as GPT-4 and Claude;
  • Supports loading local models via Hugging Face to meet privacy requirements;
  • Integrates BM25 retrieval mechanism to support retrieval-augmented context generation.
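
The BM25 retrieval step can be sketched from scratch as follows. The `k1` and `b` values are the standard BM25 parameters, and the toy corpus stands in for repository files; RepoReasoner's actual retrieval interface may differ.

```python
# From-scratch BM25 sketch for retrieval-augmented context generation.
# k1 and b are the standard BM25 parameters; the corpus is toy data,
# not RepoReasoner's actual interface.
import math
from collections import Counter

def bm25_scores(query_tokens, docs_tokens, k1=1.5, b=0.75):
    N = len(docs_tokens)
    avgdl = sum(len(d) for d in docs_tokens) / N
    # Document frequency of each term across the corpus.
    df = Counter()
    for d in docs_tokens:
        for term in set(d):
            df[term] += 1
    scores = []
    for d in docs_tokens:
        tf = Counter(d)
        s = 0.0
        for q in query_tokens:
            if q not in tf:
                continue
            idf = math.log((N - df[q] + 0.5) / (df[q] + 0.5) + 1)
            # Term frequency saturation (k1) and length normalization (b).
            s += idf * tf[q] * (k1 + 1) / (
                tf[q] + k1 * (1 - b + b * len(d) / avgdl)
            )
        scores.append(s)
    return scores

docs = [
    "def parse tokens from source".split(),
    "def render html template".split(),
    "tokenize source into tokens".split(),
]
scores = bm25_scores("parse tokens".split(), docs)
best = max(range(len(docs)), key=scores.__getitem__)
```

In a retrieval-augmented setup, the top-scoring files would be concatenated into the model's context before it attempts a prediction.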

Section 06

Application Value and Significance

RepoReasoner's value includes:

  1. Provides an evaluation standard close to real development scenarios, making it possible to identify the strongest models;
  2. The automation feature supports continuous evaluation of new models, quickly obtaining the latest performance data;
  3. Reveals gaps in repository-level reasoning capabilities, pointing out directions for model improvement.

Section 07

Quick Start Guide

Steps to use RepoReasoner:

  1. Prepare a Python 3.8+ environment and Docker container runtime;
  2. Install dependencies and place the target Python repository in the specified directory;
  3. Configure API keys (for API models) or load local models (Hugging Face);
  4. Run the output prediction or call chain prediction script; results are saved automatically.
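
Step 3 above (choosing between an API-backed and a local model) might be expressed as a small configuration helper. All names here (the function, the environment variable handling, the config keys, the placeholder checkpoint) are illustrative assumptions, not RepoReasoner's actual interface.

```python
# Hypothetical configuration helper; key names and model identifiers are
# illustrative assumptions, not RepoReasoner's actual interface.
import os

def build_model_config(use_api):
    if use_api:
        key = os.environ.get("OPENAI_API_KEY", "")
        if not key:
            raise RuntimeError(
                "Set OPENAI_API_KEY before running API-backed evaluation"
            )
        return {"backend": "openai", "api_key": key, "model": "gpt-4"}
    # Local route: a Hugging Face checkpoint keeps code and data on-premises,
    # which matters when the repositories under test are private.
    return {"backend": "huggingface", "model": "local-checkpoint-name"}
```

Failing fast on a missing API key is deliberate: a long benchmark run should not get partway through before discovering credentials are absent.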

Section 08

Conclusion: A Significant Advance in Code Intelligence Evaluation

RepoReasoner expands the evaluation of large language models' code capabilities from the function level to the repository level, providing new perspectives and tools for understanding and improving models' performance in real development scenarios. As code intelligence technology evolves, such refined evaluation frameworks will drive technological progress.