Zing Forum


LLI-Bench: An Open-Source Benchmark Framework for Evaluating Code Maintainability Using Large Language Models

Introducing the LLI-Bench project, a comprehensive software maintainability evaluation framework that combines SonarQube static analysis, Git socio-technical metrics, and LLM evaluation. It supports validation of code refactoring effects and reproducibility of academic research.

software maintainability · LLM evaluation · SonarQube · Git metrics · code quality · technical debt · refactoring · benchmark
Published 2026-04-16 17:41 · Recent activity 2026-04-16 17:52 · Estimated read 6 min

Section 01

Introduction

LLI-Bench is a comprehensive software maintainability evaluation framework that integrates SonarQube static analysis, Git socio-technical metrics, and LLM evaluation. It addresses the limitation that traditional tools cannot fully measure code maintainability, and it supports both validation of code refactoring effects and reproducibility of academic research.


Section 02

Project Background and Research Motivation

LLI-Bench originated from the CSC4006 course research project at Queen's University Belfast. Its core hypothesis is that single-dimensional code metrics cannot fully reflect the true maintainability of software: traditional tools such as SonarQube struggle to capture socio-technical dimensions like team collaboration patterns and code change frequency, while modern projects face growing codebases, accumulating technical debt, and the loss of core developers. The framework therefore integrates three types of evidence sources:

  • Static code metrics (complexity, technical debt, etc.), obtained via SonarQube;
  • Git socio-technical metrics (change frequency, author concentration, etc.);
  • LLM evaluation (pure code judgment, Git context-enhanced judgment, etc.).
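To make the Git socio-technical metrics concrete, here is a minimal sketch of how change frequency and author concentration could be derived from per-commit `git log` records. The record format and function name are illustrative assumptions, not LLI-Bench's actual implementation.

```python
# Hedged sketch: two socio-technical metrics from commit history.
# Input records are assumed to be "author<TAB>path" pairs, one per
# file touched per commit (e.g. from `git log --name-only`).
from collections import Counter

def socio_technical_metrics(log_lines):
    changes = Counter()   # commits touching each file (change frequency)
    authors = {}          # per-file commit counts by author
    for line in log_lines:
        author, path = line.split("\t", 1)
        changes[path] += 1
        authors.setdefault(path, Counter())[author] += 1
    metrics = {}
    for path, freq in changes.items():
        top_author_commits = authors[path].most_common(1)[0][1]
        metrics[path] = {
            "change_frequency": freq,
            # share of commits made by the single most active author;
            # values near 1.0 flag "bus factor" risk
            "author_concentration": top_author_commits / freq,
        }
    return metrics

log = ["alice\tsrc/core.py", "bob\tsrc/core.py", "alice\tsrc/core.py"]
print(socio_technical_metrics(log))
```

A high `author_concentration` on a frequently changed file is exactly the "loss of core developers" risk the framework's background section highlights.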


Section 03

Core Architecture and Workflow

LLI-Bench adopts a modular design and includes four workflows:

  1. Core Pipeline: Clone repository, SonarQube scan, Git metric extraction, LLM evaluation, data merging and validation;
  2. Comprehensive Evaluator: Integrate multi-source metrics to generate maintainability/sustainability judgments;
  3. Refactoring Research Module: Apply LLM refactoring, compare metric changes before and after, verify functional non-regression;
  4. Research Question Analysis: Generate paper result tables and visualizations to support reproducibility.
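The Core Pipeline stages above can be sketched as a simple orchestration. All function bodies below are illustrative stubs with invented data; the framework's real entry point is `python -m pipeline.main`.

```python
# Hedged sketch of the five Core Pipeline stages. Every value here is
# a placeholder standing in for real SonarQube / Git / LLM output.

def clone_repository(url):          # stage 1: fetch the target repo
    return {"url": url, "files": ["src/core.py"]}

def scan_with_sonarqube(repo):      # stage 2: static metrics
    return {"src/core.py": {"complexity": 12, "tech_debt_min": 30}}

def extract_git_metrics(repo):      # stage 3: socio-technical metrics
    return {"src/core.py": {"change_frequency": 3, "author_concentration": 0.67}}

def evaluate_with_llm(repo, git):   # stage 4: LLM judgment (optionally Git-aware)
    return {"src/core.py": {"llm_score": 0.7}}

def merge_and_validate(*sources):   # stage 5: join per-file records from all sources
    merged = {}
    for source in sources:
        for path, values in source.items():
            merged.setdefault(path, {}).update(values)
    return merged

repo = clone_repository("https://example.org/demo.git")
result = merge_and_validate(scan_with_sonarqube(repo),
                            extract_git_metrics(repo),
                            evaluate_with_llm(repo, None))
print(result["src/core.py"])
```

The merge step keys every record on file path, which is what lets the Comprehensive Evaluator reason over static, socio-technical, and LLM signals for the same unit of code.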

Section 04

Research Questions and Design

The framework is built around five core research questions:

  • RQ1: Consistency between LLM maintainability estimates and SonarQube baselines;
  • RQ2a: Consistency of evaluation results across different LLM models and stability of re-runs;
  • RQ2b: Impact of Git context on LLM judgments;
  • RQ3: Contribution weights of technical signals and socio-technical signals to comprehensive judgments;
  • RQ4: Validation of the effectiveness of LLM-guided refactoring.
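An RQ1-style consistency check could rank-correlate LLM maintainability scores against a SonarQube baseline. The sketch below uses invented scores and a hand-rolled Spearman correlation (no-ties formula); the framework's actual analysis lives in its Research Question module.

```python
# Hedged sketch: Spearman rank correlation between two score lists,
# as one plausible RQ1 consistency measure. Data are invented.

def ranks(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(xs, ys):
    n = len(xs)
    rx, ry = ranks(xs), ranks(ys)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))  # valid when there are no ties

llm_scores = [0.9, 0.4, 0.7, 0.2]    # hypothetical per-file LLM scores
sonar_scores = [0.8, 0.6, 0.5, 0.1]  # hypothetical SonarQube baseline
print(round(spearman(llm_scores, sonar_scores), 3))  # → 0.8
```

The same machinery extends naturally to RQ2a: correlating scores from two different LLMs, or from repeated runs of the same model, quantifies cross-model consistency and re-run stability.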

Section 05

Technical Implementation Details

  • Data Output: Generates git_metrics.csv, sonar_metrics.csv, LLM evaluation results, merged datasets, validation reports, comprehensive evaluation results, refactoring comparison data, etc.;
  • Configuration Management: Uses env.ps1/example and env.sh/example templates; users fill in the SonarQube address, LLM API key, and other settings, and local configurations are ignored by Git;
  • Execution Commands: Modular calls (e.g., python -m pipeline.main to run the core pipeline, python -m experiments.refactoring to run refactoring research).
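As a concrete illustration of merging the CSV outputs, the sketch below joins records from git_metrics.csv and sonar_metrics.csv on a shared path column. The column names are assumptions about the CSV layout, and in-memory strings stand in for the real files.

```python
# Hedged sketch: joining two of the pipeline's CSV outputs on file
# path. Column names ("path", "change_frequency", "complexity") are
# illustrative guesses at the schema, not documented LLI-Bench fields.
import csv
import io

# In-memory stand-ins for git_metrics.csv and sonar_metrics.csv
git_csv = io.StringIO("path,change_frequency\nsrc/core.py,3\n")
sonar_csv = io.StringIO("path,complexity\nsrc/core.py,12\n")

def read_indexed(f):
    """Index CSV rows by their 'path' column."""
    return {row["path"]: row for row in csv.DictReader(f)}

merged = {}
for table in (read_indexed(git_csv), read_indexed(sonar_csv)):
    for path, row in table.items():
        merged.setdefault(path, {}).update(row)

print(merged["src/core.py"])
# {'path': 'src/core.py', 'change_frequency': '3', 'complexity': '12'}
```

Keying both tables on path makes the join order-independent, which matters when the validation report must confirm that every file has complete coverage across sources.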


Section 06

Use Cases and Value

  • Academic Research: Supports large-scale empirical studies on maintainability, LLM code evaluation benchmarking, and quantitative analysis of socio-technical factors;
  • Enterprise Practice: Technical debt auditing, refactoring ROI evaluation, code review assistance, team health monitoring;
  • Open Source Ecosystem: Evaluate project maintainability trends, identify high-risk modules, and present evidence of quality improvement.

Section 07

Limitations and Outlook

Limitations: dependence on external services (SonarQube, LLM APIs), high resource requirements for large-scale analysis, LLM evaluation costs, and results that still need to be combined with manual judgment.

Outlook: as LLM capabilities improve, the framework should become more accurate and practical. Its open-source release lays a foundation for further exploration in academia and industry.