Zing Forum

Reading

RASPRef: Retrieval-Augmented Self-Supervised Prompt Optimization Framework to Enhance Large Model Reasoning Capabilities

RASPRef iteratively optimizes prompts by retrieving relevant examples and historical reasoning trajectories, leveraging multi-sample consistency, validator feedback, and model self-criticism signals. It significantly improves the mathematical reasoning performance of reasoning models without manual annotation.

Tags: RASPRef, Prompt Optimization, Reasoning Models, Self-Supervised Learning, Retrieval Augmentation, Mathematical Reasoning, DeepSeek, Chain-of-Thought
Published 2026-03-28 05:49 · Recent activity 2026-03-31 10:56 · Estimated read: 7 min

Section 01

RASPRef Framework: Retrieval-Augmented Self-Supervised Prompt Optimization to Enhance Large Model Reasoning Capabilities

This article proposes RASPRef (Retrieval-Augmented Self-Supervised Prompt Optimization Framework) to address the prompt-sensitivity problem of reasoning models. By retrieving relevant examples and historical reasoning trajectories, it iteratively optimizes prompts using multi-sample consistency, validator feedback, and model self-criticism signals, significantly improving mathematical reasoning performance without manual annotation. The framework addresses both the labor cost of manual prompt engineering and the annotation dependence of existing automated methods, offering a practical path toward deploying reasoning models.


Section 02

Prompt Sensitivity of Reasoning Models and Limitations of Existing Methods

In recent years, reasoning models such as DeepSeek R1 and OpenAI o1 have performed strongly on structured reasoning tasks, yet they remain highly sensitive to prompt wording. Manual prompt engineering requires repeated trial and error, which is time-consuming, labor-intensive, and hard to scale. Existing prompt optimization methods rely on manual annotation or task-specific supervision signals, leading to high costs and poor generalization. Self-supervised prompt optimization methods that require no manual annotation are therefore of great practical value.


Section 03

Core Components and Technical Implementation of the RASPRef Framework

RASPRef consists of three core components:

  1. Retrieval Module: Retrieve top-k relevant examples from the knowledge base (problem-solution pairs, historical reasoning trajectories, success/failure cases) based on semantic similarity;
  2. Signal Generation Module: Generate three self-supervised signals—multi-sample consistency (answer consistency across multiple sampled trajectories), validator feedback (programmatic check of reasoning correctness), and model self-criticism (model evaluates its own reasoning flaws);
  3. Prompt Optimization Module: Represent prompts as instruction templates, iteratively update prompts via a meta-model, and combine retrieved examples and feedback signals to improve prompt quality.
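The three components above can be combined into a single optimization loop. The following is a minimal sketch, not the paper's implementation: `sample_fn`, `check`, `retrieve_fn`, and `update_fn` are hypothetical stand-ins for the model sampler, programmatic validator, retrieval module, and meta-model prompt rewriter, and the equal weighting of signals is an assumption.

```python
from collections import Counter

def consistency_signal(answers):
    """Fraction of sampled answers that agree with the majority answer."""
    if not answers:
        return 0.0
    _, count = Counter(answers).most_common(1)[0]
    return count / len(answers)

def validator_signal(answer, check):
    """Programmatic correctness check (e.g. re-executing the arithmetic)."""
    return 1.0 if check(answer) else 0.0

def optimize_prompt(prompt, sample_fn, check, retrieve_fn, update_fn,
                    max_iters=5, n_samples=8):
    """One possible RASPRef-style loop: retrieve examples, sample several
    reasoning trajectories, score the prompt with self-supervised signals,
    and let a meta-model rewrite the prompt; keep the best-scoring prompt."""
    best_prompt, best_score = prompt, -1.0
    for _ in range(max_iters):
        examples = retrieve_fn(prompt)                 # top-k relevant examples
        answers = [sample_fn(prompt, examples) for _ in range(n_samples)]
        majority = Counter(answers).most_common(1)[0][0]
        # Equal weighting of the two automatic signals is an assumption.
        score = 0.5 * consistency_signal(answers) \
              + 0.5 * validator_signal(majority, check)
        if score > best_score:
            best_prompt, best_score = prompt, score
        prompt = update_fn(prompt, examples, score)    # meta-model rewrite
    return best_prompt, best_score
```

The self-criticism signal is omitted here for brevity; it would be a third scoring term produced by asking the model to critique its own trajectory.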

Section 04

Experimental Evaluation Results of the RASPRef Framework

On the GSM8K mathematical reasoning task, RASPRef-optimized prompts significantly improved model performance:

  • DeepSeek-R1-Distill-Qwen-7B achieved an accuracy of 92.3%, which is 4 percentage points higher than the base prompt and 2 percentage points higher than the chain-of-thought prompt;
  • DeepSeek-R1-Distill-Qwen-32B reached 94.1%, close to the best level;
  • Ablation studies show that removing the retrieval module reduced performance by 2%, each signal used alone was markedly weaker than the combination, and performance saturated after 5 iterations.

Section 05

Key Factors Affecting RASPRef Optimization Effectiveness

The core factors affecting RASPRef's effectiveness include:

  1. Retrieval Quality: High-relevance examples (e.g., top-3 cosine similarity) had significant optimization effects, while random examples performed poorly;
  2. Trajectory Selection: Diverse trajectories (including success/failure cases) performed better than only success cases;
  3. Signal Quality: Validator feedback was the most reliable; multi-sample consistency depended on model confidence; self-criticism signal accuracy depended on the model's own capabilities.
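The retrieval-quality factor above reduces to ranking knowledge-base entries by cosine similarity and keeping the top few. A minimal stdlib-only sketch, assuming embeddings come from any sentence-embedding model (the knowledge-base layout as `(embedding, example)` pairs is an illustrative choice, not the paper's data format):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def top_k(query_vec, kb, k=3):
    """Return the k knowledge-base examples most similar to the query.
    kb is a list of (embedding, example) pairs."""
    ranked = sorted(kb, key=lambda item: cosine(query_vec, item[0]),
                    reverse=True)
    return [example for _, example in ranked[:k]]
```

With `k=3` this matches the top-3 setting reported as effective; swapping `sorted` for random sampling reproduces the weak "random examples" baseline.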

Section 06

Implications of the RASPRef Framework for Reasoning Model Applications

The RASPRef study brings three implications:

  1. Prompt design remains a key factor in reasoning model performance, and resources should be invested in optimization;
  2. Self-supervised prompt optimization is practical and scalable, suitable for reasoning tasks with verification mechanisms;
  3. Retrieval augmentation improves prompt optimization effectiveness, and high-quality knowledge bases are valuable for long-term systems.

Section 07

Limitations of RASPRef and Future Research Directions

RASPRef has several limitations: it was evaluated only on mathematical reasoning tasks, it depends on tasks whose answers can be programmatically verified, and the optimization loop incurs substantial compute. Future directions:

  1. Explore synergies with model fine-tuning;
  2. Expand to multi-modal reasoning scenarios;
  3. Develop more efficient optimization algorithms to reduce computation costs.