Zing Forum

ReasonBench: An LLM Evaluation and Training Framework for Separating Memory and Reasoning Capabilities

An open-source framework for evaluating and enhancing the reasoning capabilities of large language models (LLMs), which explicitly separates memory extraction and logical reasoning processes using special tokens to help models better solve complex problems.

Tags: LLM reasoning · Chain-of-Thought · model evaluation · machine learning · AI research · fine-tuning · open-source framework
Published 2026-04-07 18:58 · Recent activity 2026-04-07 19:23 · Estimated read 6 min

Section 01

Core Introduction to the ReasonBench Framework

ReasonBench is an open-source framework focused on evaluating and enhancing the reasoning capabilities of LLMs. Its core innovation lies in explicitly separating the "memory extraction" and "logical reasoning" processes using special tokens, addressing the problem that traditional Chain-of-Thought (CoT) methods cannot distinguish whether a model relies on memory or genuine reasoning. This helps researchers observe and improve model reasoning capabilities in a more granular way.

Section 02

Background: Pain Points of Traditional LLM Reasoning Evaluation

Although traditional CoT prompting methods can improve reasoning performance, they struggle to distinguish whether a model is performing logical reasoning or recalling similar patterns from training data. This makes it impossible to accurately diagnose the root cause of model errors and difficult to optimize reasoning capabilities in a targeted manner. ReasonBench is designed precisely to address this issue.

Section 03

Core Method: Explicit Separation of Memory and Reasoning

ReasonBench achieves the separation of cognitive processes using two special tokens:

  • <memory>: extracts facts, numbers, or formulas from the problem; performs information extraction only, with no derivation;
  • <reason>: carries out calculations and logical operations on the facts from the memory stage to derive a conclusion.

This separation helps diagnose the root cause of errors (memory stage vs. reasoning stage), improve training strategies in a targeted way, and enhance the interpretability of the model's thinking process.
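As an illustration, a small parser can split an annotated output into its two stages, which is what makes stage-level error diagnosis possible. This is a minimal sketch, not ReasonBench's own code: it assumes the closing-tag output format (the framework also supports a colon format, configured in conf/tokens.py), and the sample output is hypothetical.

```python
import re

def split_cot(text: str):
    """Split an annotated chain-of-thought into memory and reasoning parts.

    Assumes the closing-tag format (<memory>...</memory>, <reason>...</reason>);
    ReasonBench also supports a colon-style format.
    """
    memory = re.findall(r"<memory>(.*?)</memory>", text, re.DOTALL)
    reason = re.findall(r"<reason>(.*?)</reason>", text, re.DOTALL)
    return [m.strip() for m in memory], [r.strip() for r in reason]

# Hypothetical annotated output for a simple word problem.
output = (
    "<memory>The train travels 60 km/h for 2.5 hours.</memory>"
    "<reason>distance = 60 * 2.5 = 150 km</reason>"
)
facts, steps = split_cot(output)
```

With the stages separated, a wrong extracted fact points to a memory-stage failure, while a wrong derivation over correct facts points to a reasoning-stage failure.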

Section 04

Technical Workflow: Generation, Fine-tuning, and Evaluation

The ReasonBench workflow consists of three stages:

  1. CoT Data Generation: use an LLM (served locally via vLLM or through the OpenAI API) to generate structured CoT steps annotated with the special tokens; Qwen3.5-27B is the default teacher model;
  2. Model Fine-tuning: supports standard supervised fine-tuning (LoRA) as well as several reinforcement learning methods (DPO, CPO, etc.);
  3. Evaluation: flexibly supports local models, vLLM acceleration, or the OpenAI API, and automatically adapts to the model type.

Example commands:

  • Generate data: ./run.sh --generate --dataset truthfulqa --mode train
  • Fine-tune: ./run.sh --train --dataset truthfulqa
  • Evaluate: ./run.sh --eval --model /path/to/checkpoint --dataset truthfulqa
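Stages 1 and 2 connect through the training target: the generation stage produces supervision strings in the separated-CoT format, which the fine-tuning stage then learns to reproduce. A minimal sketch of how such a target might be assembled; the tag style and the "answer:" convention are assumptions here, since ReasonBench configures the actual format in conf/tokens.py:

```python
def build_sft_target(facts, derivation, answer):
    """Assemble a supervised fine-tuning target in the separated-CoT format.

    `facts` go into <memory> spans (extraction only), the derivation and the
    final answer into a <reason> span. Format details are assumptions.
    """
    memory = "".join(f"<memory>{f}</memory>" for f in facts)
    reason = f"<reason>{derivation} answer: {answer}</reason>"
    return memory + reason

# Hypothetical training sample.
target = build_sft_target(
    facts=["speed 60 km/h", "time 2.5 h"],
    derivation="60 * 2.5 = 150.",
    answer="150 km",
)
```

Because the target is a single string with embedded tags, it drops straight into a standard SFT pipeline; the structure is enforced purely by the teacher-generated data.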

Section 05

Features and Extensions: Datasets and Configuration

ReasonBench includes multiple built-in reasoning benchmark datasets (e.g., GSM8K, MMLU-Pro, TruthfulQA) covering fields such as mathematics, common sense, and scientific Q&A. The configuration system uses a layered architecture:

  • conf/settings.yaml: controls core settings such as models and training hyperparameters;
  • conf/datasets.yaml: defines dataset properties;
  • conf/tokens.py: customizes the CoT tokens and output format (colon style or closing tags).

Adding a custom dataset takes only three steps: register it in datasets.yaml, implement the dataset class, and register it in the mapping table.
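A hypothetical sketch of steps 2 and 3 of adding a dataset. The class interface and registry names below are assumptions for illustration; the real base class and mapping table live in the ReasonBench source, and step 1 (the datasets.yaml entry) is not shown.

```python
class MyDataset:
    """Step 2: implement the dataset class (interface is an assumption)."""
    name = "my_dataset"

    def load(self):
        # Return (question, answer) pairs; a real implementation would read
        # from disk or a hub rather than hard-coding examples.
        return [("What is 2 + 2?", "4")]

# Step 3: register the class in the mapping table so the framework can
# resolve the dataset name from the command line / config.
DATASET_REGISTRY = {"my_dataset": MyDataset}

def get_dataset(name):
    return DATASET_REGISTRY[name]()
```

A name-to-class registry like this is a common pattern for config-driven frameworks: the YAML entry supplies the name, and the mapping table turns it into a loader.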

Section 06

Research Value and Open Source Community

ReasonBench offers value to LLM research in several directions: accurately evaluating pure reasoning ability, locating the root cause of errors, supporting curriculum learning, comparing models' reasoning performance fairly, and improving interpretability. Installation is simple:

  git clone https://github.com/metalearningnet/ReasonBench.git && cd ReasonBench && ./install.sh

Both vLLM and OpenAI API backends are supported. The project is open-sourced under the MIT license, and the community is encouraged to contribute new datasets, training methods, and evaluation metrics.