Zing Forum

ESSAM: An Optimization Method for Large Model Mathematical Reasoning Fusing Evolution Strategy and Sharpness-Aware Minimization

This article introduces ESSAM, a zeroth-order fine-tuning method combining Evolution Strategy (ES) and Sharpness-Aware Minimization (SAM), designed specifically to enhance the mathematical reasoning ability of large language models.

Tags: zeroth-order optimization, evolution strategy, SAM, large language models, mathematical reasoning, fine-tuning, ES, sharpness-aware
Published 2026-04-10 17:38 · Recent activity 2026-04-10 17:44 · Estimated read 7 min

Section 01

ESSAM Method Guide: Zeroth-Order Fine-Tuning Fusing ES and SAM to Enhance Large Model Mathematical Reasoning

This article introduces ESSAM, a zeroth-order fine-tuning method that combines Evolution Strategy (ES) with Sharpness-Aware Minimization (SAM) to strengthen the mathematical reasoning ability of large language models. Traditional backpropagation-based fine-tuning carries a heavy computational and memory burden, while zeroth-order optimization is memory-efficient but tends to underperform on complex tasks. By fusing the two techniques, ESSAM keeps the memory advantages of zeroth-order methods while improving optimization effectiveness.

Section 02

Background: The Rise and Challenges of Zeroth-Order Optimization

As the scale of large language models grows, traditional backpropagation-based fine-tuning must store and compute enormous gradients, which places extreme demands on memory and compute. Zeroth-Order Optimization (ZOO) emerged as a solution: it estimates gradients through forward passes alone, significantly reducing memory overhead. However, zeroth-order methods often underperform gradient-based methods on complex tasks such as mathematical reasoning, so balancing memory efficiency against optimization effectiveness is a key research focus.
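The idea of estimating gradients from forward passes alone can be illustrated with a two-point (SPSA-style) estimator. This is a minimal NumPy sketch of the general technique, not ESSAM itself; the function name and constants are illustrative:

```python
import numpy as np

def zo_gradient_estimate(loss_fn, theta, eps=1e-3, n_samples=64, seed=0):
    """Two-point (SPSA-style) zeroth-order gradient estimate: probe the loss
    along random Gaussian directions using forward evaluations only."""
    rng = np.random.default_rng(seed)
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        z = rng.standard_normal(theta.shape)   # random probe direction
        # Finite difference of the loss along z -- no backpropagation needed.
        slope = (loss_fn(theta + eps * z) - loss_fn(theta - eps * z)) / (2 * eps)
        grad += slope * z
    return grad / n_samples

# On a toy quadratic the true gradient at theta is 2 * theta; the estimate
# approaches it as n_samples grows.
theta = np.array([1.0, -2.0, 0.5])
g = zo_gradient_estimate(lambda t: float(np.sum(t ** 2)), theta, n_samples=2000)
```

Note that each sample costs two forward passes but no stored activations, which is the source of the memory savings the section describes.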

Section 03

Core Innovation of ESSAM: Fusion Mechanism of ES and SAM

ESSAM combines Evolution Strategy (ES) with Sharpness-Aware Minimization (SAM). ES explores model behavior through random perturbations in parameter space: it needs no backpropagation, parallelizes well, and is highly robust. SAM attends to the sharpness of the loss landscape, seeking flat minima that tend to generalize well. ESSAM's innovation is to bring SAM into the zeroth-order framework: in each iteration it evaluates both the performance of the current parameters and the local geometry of the loss surface, steering the optimization toward flatter regions.
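One way to probe the local geometry without backpropagation is to estimate an ascent direction with the same two-point trick, step a small radius rho along it, and measure how much the loss rises. The sketch below is an assumed illustration of this idea, not the paper's formulation:

```python
import numpy as np

def zo_sharpness(loss_fn, theta, rho=0.05, eps=1e-3, n_samples=32, seed=0):
    """Forward-only sharpness probe: estimate an ascent direction with the
    two-point estimator, step a radius rho along it, and report the loss rise.
    Flat regions yield a small rise; sharp regions a large one."""
    rng = np.random.default_rng(seed)
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        z = rng.standard_normal(theta.shape)
        grad += (loss_fn(theta + eps * z) - loss_fn(theta - eps * z)) / (2 * eps) * z
    grad /= n_samples
    ascent = rho * grad / (np.linalg.norm(grad) + 1e-12)  # SAM-style normalized ascent
    return loss_fn(theta + ascent) - loss_fn(theta)

# The same point in a sharp basin (100 * ||t||^2) vs. a flat one (||t||^2):
theta = np.array([0.3, -0.2])
sharp = zo_sharpness(lambda t: float(100 * np.sum(t ** 2)), theta)
flat = zo_sharpness(lambda t: float(np.sum(t ** 2)), theta)
```

The returned loss rise is exactly the quantity SAM penalizes; a flatter basin gives a smaller rise at the same radius.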

Section 04

Technical Mechanism of ESSAM: Four-Step Optimization Process

The ESSAM optimization process consists of four steps:

1. Perturbation Sampling: sample random perturbation vectors from a standard normal distribution.
2. Sharpness Estimation: compute the loss change along each perturbation direction to approximate local curvature (Hessian) information.
3. Direction Aggregation: combine ES's exploratory estimate with SAM's sharpness-aware objective into a single optimization direction.
4. Parameter Update: update the parameters along the aggregated direction with an adaptively adjusted step size.
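The four steps above can be sketched as a single update on a toy problem. The aggregation rule (take the zeroth-order gradient at the SAM-perturbed point) and the adaptive step-size formula used here are illustrative assumptions, not the authors' exact method:

```python
import numpy as np

def essam_step(loss_fn, theta, rho=0.05, eps=1e-3, lr=0.1, n_samples=32, rng=None):
    """One illustrative ESSAM-style update following the four steps."""
    rng = rng if rng is not None else np.random.default_rng(0)

    def zo_grad(point):
        g = np.zeros_like(point)
        for _ in range(n_samples):
            z = rng.standard_normal(point.shape)  # 1. perturbation sampling
            g += (loss_fn(point + eps * z) - loss_fn(point - eps * z)) / (2 * eps) * z
        return g / n_samples

    g = zo_grad(theta)                            # 2. probe local loss geometry
    ascent = rho * g / (np.linalg.norm(g) + 1e-12)
    g_sam = zo_grad(theta + ascent)               # 3. direction at the SAM-perturbed point
    step = lr / (1.0 + np.linalg.norm(g_sam))     # 4. adaptive step size (assumed rule)
    return theta - step * g_sam

# Minimize a toy quadratic: the iterate should approach the minimum at 0.
rng = np.random.default_rng(0)
theta = np.array([2.0, -1.5])
for _ in range(50):
    theta = essam_step(lambda t: float(np.sum(t ** 2)), theta, rng=rng)
```

Every quantity here comes from forward evaluations of `loss_fn`, so the memory profile matches ordinary zeroth-order optimization.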

Section 05

Targeted Optimization of ESSAM for Mathematical Reasoning Tasks

Mathematical reasoning poses three major challenges for LLMs: multi-step reasoning chains are error-prone, symbolic manipulation demands precision, and there is little tolerance for inaccuracy. Through SAM's sharpness-aware mechanism, ESSAM encourages the model to learn more stable reasoning strategies that are less sensitive to small input changes, improving the reliability of complex mathematical reasoning.

Section 06

Practical Value of Zeroth-Order Optimization: Application Scenarios of ESSAM

Zeroth-order optimization, as represented by ESSAM, has clear practical value: 1. it lowers the hardware threshold, so consumer-grade GPUs can fine-tune large models; 2. it remains feasible in privacy-sensitive scenarios (e.g., some federated learning setups); 3. it is the only option for black-box optimization, where the model is reachable only through an API.

Section 07

Limitations of ESSAM and Future Research Directions

Limitations of ESSAM: low sample efficiency (it needs many more forward passes), high gradient-estimate variance in high-dimensional parameter spaces that slows convergence, and theoretical guarantees that are less mature than those of gradient-based methods. Future directions include more efficient perturbation sampling strategies, combination with parameter-efficient fine-tuning methods such as LoRA, and application to other complex reasoning tasks.
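Combining zeroth-order updates with LoRA-style adapters would shrink the perturbed parameter space and directly attack the variance problem. A toy sketch of the idea, in which the dimensions, the frozen/trainable split, and the choice to perturb only the up-projection are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 32, 16, 2
W = rng.standard_normal((d_out, d_in))        # frozen base weight
A = rng.standard_normal((r, d_in))            # frozen LoRA down-projection
B_true = rng.standard_normal((d_out, r)) * 0.1
x = rng.standard_normal((8, d_in))            # toy batch
target = x @ (W + B_true @ A).T               # realizable fine-tuning target

def loss(b_flat):
    B = b_flat.reshape(d_out, r)              # trainable up-projection only
    pred = x @ (W + B @ A).T                  # base output + low-rank update
    return float(np.mean((pred - target) ** 2))

# Zeroth-order tuning over d_out * r = 32 adapter parameters instead of the
# full d_out * d_in = 512 base weights: far fewer directions to perturb.
b = np.zeros(d_out * r)
eps, lr, n_samples = 1e-3, 0.02, 16
for _ in range(300):
    g = np.zeros_like(b)
    for _ in range(n_samples):
        z = rng.standard_normal(b.shape)
        g += (loss(b + eps * z) - loss(b - eps * z)) / (2 * eps) * z
    b -= lr * g / n_samples
```

Because zeroth-order variance grows with the number of perturbed dimensions, restricting perturbations to a low-rank adapter is one plausible route to the improved sample efficiency this section calls for.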

Section 08

Conclusion: Significance and Outlook of ESSAM

ESSAM is an important exploration direction for large model optimization. Against the backdrop of computing power constraints, balancing performance and training costs is a common challenge for the AI community. By fusing ES's exploration ability and SAM's pursuit of generalization, ESSAM opens up new possibilities for the application of zeroth-order optimization in complex reasoning tasks and is worthy of attention.