Automatically Learning Mutation Strategies for Differential Evolution Algorithms Using Large Language Models

Open-source project of a GECCO 2026 paper, exploring the use of large language models to automatically design and optimize mutation strategies for differential evolution algorithms, enabling automation of algorithm design

Differential Evolution · Large Language Models · Automatic Algorithm Design · Evolutionary Computation · Optimization Algorithms · Machine Learning · Code Generation · Metaheuristics
Published 2026-03-31 11:44 · Recent activity 2026-03-31 11:55 · Estimated read 7 min
Section 01

Introduction: Automatically Optimizing Mutation Strategies for Differential Evolution Algorithms Using Large Language Models

The performance of Differential Evolution (DE) algorithms depends heavily on the mutation strategy, but traditional strategy design relies on expert experience and adapts poorly to different problems. This open-source project, accompanying a GECCO 2026 paper, explores using Large Language Models (LLMs) to automatically design and optimize DE mutation strategies. Through a performance-driven learning framework, it automates algorithm design, reduces manual effort, and may discover novel strategies. The code is open source, so readers can reproduce the experiments or apply the framework to their own problems.

Section 02

Background: Mutation Strategies of Differential Evolution Algorithms and Limitations of Traditional Design

Differential Evolution is a classic evolutionary computation method whose core is the mutation mechanism: mutation vectors are generated from other individuals in the population. Classic strategies include DE/rand/1 (strong global search, suited to multimodal functions) and DE/best/1 (fast convergence, suited to unimodal functions). Traditional strategy design has limitations: it requires deep domain knowledge; manual parameter tuning is time-consuming and error-prone; and fixed strategies struggle to adapt to dynamic problem characteristics. Existing automatic design methods are mostly limited to predefined strategy sets and lack the capacity for innovation.
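For reference, the two classic strategies mentioned above can be sketched in a few lines of NumPy. The scale factor F = 0.5 and the minimization convention are common defaults, not details taken from the project:

```python
import numpy as np

def de_rand_1(pop, i, F=0.5, rng=None):
    """DE/rand/1: mutant built from three distinct random individuals
    (strong global search, suited to multimodal functions)."""
    rng = rng or np.random.default_rng()
    idx = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = rng.choice(idx, size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

def de_best_1(pop, i, fitness, F=0.5, rng=None):
    """DE/best/1: mutant anchored at the current best individual
    (fast convergence, suited to unimodal functions)."""
    rng = rng or np.random.default_rng()
    best = pop[int(np.argmin(fitness))]  # minimization assumed
    idx = [j for j in range(len(pop)) if j != i]
    r1, r2 = rng.choice(idx, size=2, replace=False)
    return best + F * (pop[r1] - pop[r2])
```

The difference in exploration behavior comes entirely from the base vector: a random individual in DE/rand/1 versus the population's best in DE/best/1.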

Section 03

Methodology: Performance-Driven LLM Automatic Strategy Learning Framework

The core of the framework is to let the LLM learn and improve mutation strategies during the optimization process itself. It consists of three main components:

1. Strategy Generation Module: the LLM generates executable mutation-strategy code based on the problem description, population state, and historical performance;
2. Performance Evaluation and Feedback: each strategy is applied to optimization tasks, and indicators such as convergence speed and solution quality are evaluated and fed back to the LLM;
3. Strategy Library Management: excellent strategies are stored, mappings between strategies and problem characteristics are established through meta-learning, and these assist in generating new strategies.

Section 04

Experimental Validation: Effectiveness and Application Scenarios of Automatic Strategies

The project validated the automatically learned strategies on the CEC benchmark test suite: they performed well on unimodal, multimodal, hybrid, and composition functions, and on multimodal functions in particular their strong exploration found better global solutions. In machine-learning hyperparameter optimization scenarios, the automatic strategies dynamically adapted to the hyperparameter-space characteristics of different models and outperformed fixed strategies.

Section 05

Conclusions and Insights: New Directions for Automatic Algorithm Design

This project provides important insights for the field of automatic algorithm design:

1. LLMs can act as algorithm-component generators, not just selectors, creating entirely new strategies;
2. performance-driven closed-loop learning is effective, resembling a human expert's trial-and-error process;
3. code generation provides fine-grained flexibility that supports strategy innovation.

The approach reduces manual effort and may discover strategies that humans would be unlikely to conceive.

Section 06

Limitations and Future Directions

Current limitations: strategy generation and execution add time overhead, which restricts use in real-time scenarios, and LLM inference costs are high, a potential bottleneck for large-scale optimization. Future directions: improve strategy-generation efficiency (caching common strategies, lighter-weight models); explore multi-task learning to transfer experience across problems; and extend the approach to other evolutionary algorithms and optimization methods.

Section 07

Open-Source Resources and Usage Guide

The project has been open-sourced on GitHub (URL: https://github.com/ML-LLM-Projects/Learning-Differential-Evolution-Mutation-Strategies-via-Performance-Driven-Large-Language-M), including complete implementation code, experiment scripts, sample data, and detailed documentation and tutorials. Usage process: define the optimization problem → configure the learning parameters → run the automatic strategy-learning loop → evaluate the generated strategies.
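The four-step usage process might translate into a driver script shaped like the one below. Every name here (`Problem`, `LearnerConfig`, `run_learning_loop`) is a hypothetical illustration of the workflow's structure, not the repository's actual API; consult the project's README for the real entry points:

```python
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class Problem:
    """Step 1: define the optimization problem (hypothetical container)."""
    objective: Callable[[np.ndarray], float]
    dim: int
    bounds: tuple = (-5.0, 5.0)

@dataclass
class LearnerConfig:
    """Step 2: configure the learning parameters (hypothetical defaults)."""
    rounds: int = 5
    pop_size: int = 30

def run_learning_loop(problem, config, rng=None):
    """Steps 3-4: placeholder loop that samples populations and tracks the best
    objective value found. In the real project, an LLM would propose mutation
    strategies here instead of random sampling."""
    rng = rng or np.random.default_rng(0)
    best = float("inf")
    for _ in range(config.rounds):
        pop = rng.uniform(*problem.bounds, size=(config.pop_size, problem.dim))
        best = min(best, min(problem.objective(x) for x in pop))
    return best
```

A driver would then be as simple as constructing a `Problem` for the target objective, choosing a `LearnerConfig`, and comparing the returned score against a fixed-strategy baseline.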