Zing Forum

GRPO Reasoning Fine-tuning: Enhancing Mathematical Reasoning Capabilities of Small Models via Group Relative Policy Optimization

This project uses the GRPO (Group Relative Policy Optimization) method to fine-tune the SmolLM2-135M small model, optimizing both reasoning accuracy and structured output format simultaneously on the GSM8K mathematical dataset through a multi-objective reward system.

Tags: GRPO · reinforcement learning · mathematical reasoning · small models · GSM8K · fine-tuning · DeepSeek · structured output · open-source implementation
Published 2026-04-01 20:35 · Recent activity 2026-04-01 20:51 · Estimated read: 5 min
Section 01

Introduction
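The multi-objective reward system described above (reasoning accuracy plus structured output format) can be sketched as two additive terms. The `<reasoning>`/`<answer>` tags, weights, and function names here are illustrative assumptions; the project's actual reward functions are not shown in this post:

```python
import re

# Hypothetical sketch of a multi-objective reward. The tag layout and the
# 1.0 / 0.5 weights are assumptions, not the project's actual values.

def format_reward(completion: str) -> float:
    """Reward structured output: 0.5 if the completion follows the expected tag layout."""
    pattern = r"^<reasoning>.*?</reasoning>\s*<answer>.*?</answer>$"
    return 0.5 if re.match(pattern, completion.strip(), re.DOTALL) else 0.0

def correctness_reward(completion: str, gold_answer: str) -> float:
    """Reward accuracy: 1.0 if the extracted answer matches the GSM8K gold answer."""
    match = re.search(r"<answer>(.*?)</answer>", completion, re.DOTALL)
    if match and match.group(1).strip() == gold_answer.strip():
        return 1.0
    return 0.0

def total_reward(completion: str, gold_answer: str) -> float:
    # Accuracy and format are optimized simultaneously by summing both terms.
    return correctness_reward(completion, gold_answer) + format_reward(completion)
```

Summing the terms lets a completion earn partial credit for well-formed output even when the final answer is wrong, which gives the policy a denser learning signal.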

Section 02

Challenges in Small Model Reasoning Capabilities

Performance on mathematical reasoning tasks has long been an important indicator of a language model's intelligence. Strong reasoning, however, seems closely tied to model size: closed-source large models like GPT-4 and Claude perform strongly, while open-source small models (fewer than 1B parameters) often struggle with mathematical reasoning.

Does this mean small models can never make progress on reasoning tasks? The GRPO reasoning fine-tuning project gives an encouraging answer: with the right training method, even a model with only 135 million parameters can make significant progress in mathematical reasoning.

Section 03

GRPO: Group Relative Policy Optimization

GRPO (Group Relative Policy Optimization) is a reinforcement learning method first applied at scale by the DeepSeek team in the training of their R1 model, where it attracted widespread attention. Compared with traditional PPO (Proximal Policy Optimization), GRPO has several distinct advantages:

Section 04

No Need for a Value Model

PPO usually requires a separate value model (critic) to estimate state values, which adds training complexity and memory overhead. GRPO instead computes advantages through relative comparisons within a group of samples for the same prompt, eliminating the extra value model entirely.
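The difference can be written out explicitly. In PPO the advantage is estimated with a learned value function V (e.g. via generalized advantage estimation), whereas in GRPO, following the DeepSeekMath/R1 formulation, it comes from the group's own reward statistics. A sketch of the standard formulas, not the project's exact loss:

```latex
% PPO: advantage from a learned critic V (generalized advantage estimation)
\hat{A}_t = \sum_{l \ge 0} (\gamma\lambda)^l \,\delta_{t+l},
\qquad \delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)

% GRPO: sample G answers o_1, \dots, o_G to one question, with rewards r_1, \dots, r_G
\hat{A}_i = \frac{r_i - \operatorname{mean}(\{r_1, \dots, r_G\})}
                 {\operatorname{std}(\{r_1, \dots, r_G\})}
```

No network other than the policy itself appears in the GRPO advantage, which is where the memory savings come from.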

Section 05

Intra-group Relative Advantage Calculation

The core idea of GRPO is: for the same problem, let the model generate multiple answers, then compute each answer's advantage from its quality relative to the rest of the group. Answers that score above the group average get positive advantages; answers below it get negative ones.
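A minimal sketch of this intra-group computation, assuming the (reward minus group mean) over group std normalization from the DeepSeek GRPO formulation (the project's exact implementation is not shown in this post):

```python
from statistics import mean, stdev

def group_relative_advantages(rewards: list[float], eps: float = 1e-8) -> list[float]:
    """Advantage of each answer relative to its own group of samples.

    Answers scoring above the group mean get positive advantages;
    answers scoring below it get negative ones.
    """
    mu = mean(rewards)
    sigma = stdev(rewards)          # spread of rewards within the group
    return [(r - mu) / (sigma + eps) for r in rewards]

# Four answers sampled for the same question, scored by the reward function:
advantages = group_relative_advantages([1.5, 0.5, 0.0, 0.0])
# advantages[0] is positive (the best answer); the two zero-reward answers
# get equal negative advantages.
```

Because the baseline is the group mean, the advantages always sum to (approximately) zero: the group itself plays the role PPO assigns to the critic.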

Section 06

Training Stability

Since the advantage calculation is based on intra-group relative comparisons rather than absolute reward values, GRPO is less sensitive to the scale of the reward function, making the training process more stable.
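This scale-insensitivity is easy to verify numerically: multiplying every reward in a group by a constant leaves the normalized advantages unchanged. A small illustration, not taken from the project's code:

```python
from statistics import mean, stdev

def normalized(rewards):
    """Group-relative advantages: center by the group mean, divide by the std."""
    mu, sigma = mean(rewards), stdev(rewards)
    return [(r - mu) / sigma for r in rewards]

# The same four sampled answers, scored on two reward scales.
raw = [1.5, 0.5, 0.0, 0.0]
scaled = [100.0 * r for r in raw]

# The scale factor cancels out of (r - mean) / std, so the advantages
# (and hence the policy gradient) are identical under either scale.
assert all(abs(a - b) < 1e-9 for a, b in zip(normalized(raw), normalized(scaled)))
```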

Section 07

Project Implementation Details

This project demonstrates how to apply GRPO for reasoning fine-tuning in a resource-constrained environment:

Section 08

Choice of Base Model

Hugging Face's SmolLM2-135M-Instruct is selected as the base model. This is a small language model with only 135 million parameters, suitable for training on a single consumer-grade GPU (8GB+ VRAM). The choice of a small model is deliberate: it is meant to prove that the effectiveness of the GRPO method does not depend on model size.
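One way such a run could be wired up is with Hugging Face TRL's `GRPOTrainer`. The post does not say which framework the project uses, so the trainer choice, the hyperparameters, and the reward function below are illustrative assumptions, not the project's actual configuration:

```python
import re

def format_reward(completions, **kwargs):
    """Reward 1.0 for completions that follow the assumed tag layout, else 0.0."""
    pattern = r"<reasoning>.*?</reasoning>\s*<answer>.*?</answer>"
    return [1.0 if re.search(pattern, c, re.DOTALL) else 0.0 for c in completions]

def run_training():
    # Heavy imports and the model download live here; call run_training() to launch.
    from datasets import load_dataset
    from trl import GRPOConfig, GRPOTrainer

    # GSM8K's "main" config has "question"/"answer" columns;
    # GRPOTrainer expects a "prompt" column.
    dataset = load_dataset("openai/gsm8k", "main", split="train")
    dataset = dataset.rename_column("question", "prompt")

    config = GRPOConfig(
        output_dir="smollm2-grpo",
        num_generations=8,             # group size G: answers sampled per question
        max_completion_length=256,
        per_device_train_batch_size=8,
    )
    trainer = GRPOTrainer(
        model="HuggingFaceTB/SmolLM2-135M-Instruct",
        reward_funcs=[format_reward],  # an accuracy reward would be added alongside
        args=config,
        train_dataset=dataset,
    )
    trainer.train()
```

Passing the model name as a string lets the trainer load the checkpoint itself; multiple reward functions can be listed in `reward_funcs` to realize the multi-objective setup described earlier.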