metANN: An R Framework for Neural Network Training Fusing Metaheuristic Algorithms and Gradient Optimization

metANN is an R package that trains feedforward neural networks with either metaheuristic or gradient-based optimization algorithms, covering regression, binary classification, and multi-class classification tasks, and also supporting general continuous optimization problems.

Tags: R, neural networks, metaheuristic algorithms, optimization, machine learning, deep learning, particle swarm optimization, genetic algorithms, Adam, multilayer perceptron
Published 2026-05-12 03:24 · Recent activity 2026-05-12 03:29 · Estimated read 7 min

Section 01

metANN: An R Framework for Neural Network Training—Dual Support for Metaheuristics and Gradient Optimization

metANN is an R package designed specifically for training feedforward neural networks; its core feature is simultaneous support for metaheuristic and gradient-based optimization algorithms. It covers regression, binary classification, and multi-class classification tasks as well as general continuous optimization problems, giving users flexible optimization strategies to match different scenarios.


Section 02

Background and Motivation: Limitations of Gradient Optimization and Advantages of Metaheuristic Algorithms

Traditional neural network training relies on gradient-based methods such as SGD and Adam, but these are prone to getting stuck in local optima, are sensitive to the learning rate, and may explore non-convex loss surfaces insufficiently. Metaheuristic algorithms (e.g., PSO, GA) do not depend on gradients and offer strong global search capabilities. metANN therefore combines the two, giving R users a unified optimization framework.


Section 03

Project Overview: Supported Optimizer Types

metANN is an R package for training feedforward MLPs through a flexible optimization interface. Supported metaheuristic optimizers include PSO (particle swarm optimization), DE (differential evolution), GA (genetic algorithm), ABC (artificial bee colony), GWO (grey wolf optimizer), WOA (whale optimization algorithm), TLBO (teaching-learning-based optimization), and SBOA (secretary bird optimization algorithm); gradient optimizers include SGD and Adam. This dual support lets users choose by problem characteristics: metaheuristics suit cases where gradients are hard to compute or local optima abound, while gradient optimization suits large-scale data and deep networks (a brief sketch of the dual interface follows below).
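
To make the dual interface concrete, here is a minimal sketch in which the same training call, met_mlp() (detailed in the next section), switches between a metaheuristic and a gradient optimizer simply by naming a different method. The method argument name and its string values are assumptions about the interface, not confirmed API.

library(metANN)

# Same model, two optimization strategies (hypothetical `method` argument):
fit_meta <- met_mlp(Petal.Width ~ ., data = iris, method = "PSO")   # gradient-free
fit_grad <- met_mlp(Petal.Width ~ ., data = iris, method = "adam")  # gradient-based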


Section 04

Detailed Core Functions: General Optimization and Neural Network Training

General continuous optimization: numerical optimization is handled by the met_optimize() function. Metaheuristics require only an objective function, while gradient optimizers additionally need a gradient function; the package is tested on benchmark functions such as Sphere and Rastrigin (a sketch follows below).

Neural network training: met_mlp() supports regression (numerical response), binary classification (sigmoid output with binary cross-entropy), and multi-class classification (softmax output with categorical cross-entropy), and accepts both formula and matrix inputs.

Activation and loss functions: available activations include linear, sigmoid, tanh, and ReLU; available losses include MSE, MAE, Huber, and cross-entropy.
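
Below is a minimal sketch of general continuous optimization with met_optimize(), using the Sphere function mentioned above. The function name comes from the package description, but the argument names (fn, grad, lower, upper, method) are assumptions; consult the package documentation for the actual signature.

library(metANN)

sphere      <- function(x) sum(x^2)   # benchmark objective: global minimum 0 at the origin
sphere_grad <- function(x) 2 * x      # analytic gradient, needed only by gradient optimizers

# Metaheuristic run: the objective function alone is enough
res_pso <- met_optimize(fn = sphere, lower = rep(-5, 10),
                        upper = rep(5, 10), method = "PSO")

# Gradient run: a gradient function must be supplied as well
res_adam <- met_optimize(fn = sphere, grad = sphere_grad,
                         lower = rep(-5, 10), upper = rep(5, 10),
                         method = "adam")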


Section 05

Algorithm Principles and Innovation Points

Secretary bird optimization algorithm (SBOA): a metaheuristic proposed in 2024, inspired by the hunting behavior of the African secretary bird; it balances convergence speed with accuracy and suits high-dimensional problems.

Gradient-free training: metaheuristics optimize the network weights directly, providing strong global search with no need for backpropagation (useful for non-differentiable activations, discrete operations, and similar cases).

Hybrid strategy: the package supports global exploration with a metaheuristic followed by fine-tuning with a gradient optimizer, which often works better than either method alone (a two-stage sketch follows below).
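
A hedged sketch of the two-stage hybrid strategy: a metaheuristic explores the weight space globally, and a gradient optimizer then fine-tunes from the best weights found. The warm-start mechanism shown (an init_weights argument and a $weights field) is hypothetical; metANN may expose this differently.

library(metANN)

# Stage 1: global exploration with SBOA (gradient-free)
stage1 <- met_mlp(Petal.Width ~ ., data = iris, method = "SBOA")

# Stage 2: fine-tune with Adam, seeded from stage 1's solution
# (`init_weights` and `stage1$weights` are hypothetical names)
stage2 <- met_mlp(Petal.Width ~ ., data = iris, method = "adam",
                  init_weights = stage1$weights)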


Section 06

Usage Examples and Practical Guide

Installation: remotes::install_github("burakdilber/metANN"), then library(metANN).

Regression example: predict petal width from the iris dataset with the SBOA optimizer.

Classification example: multi-class classification of iris species with the Adam optimizer.

Evaluation and visualization: multiple performance metrics are supported, and plot_network() displays the network architecture. A combined sketch of these steps follows below.
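
A combined sketch of the workflow above. The installation command and the functions met_mlp() and plot_network() come from the article; the specific arguments (hidden, activation, loss, method) are assumptions about the interface.

# Install from GitHub and load
remotes::install_github("burakdilber/metANN")
library(metANN)

# Regression: predict petal width with the SBOA optimizer
fit_reg <- met_mlp(Petal.Width ~ ., data = iris,
                   hidden = 5, activation = "relu",
                   loss = "mse", method = "SBOA")

# Multi-class classification: predict species with Adam
# (softmax output with categorical cross-entropy)
fit_cls <- met_mlp(Species ~ ., data = iris,
                   hidden = 8, activation = "tanh",
                   loss = "cross_entropy", method = "adam")

plot_network(fit_reg)   # visualize the fitted architecture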


Section 07

Application Scenarios and Applicability Analysis

metANN is well suited to small datasets (where metaheuristics tend to be more stable), non-convex optimization problems (where global search is an advantage), teaching and research (the approach is intuitive and easy to understand), and users working within the R ecosystem. Its main limitation: for large-scale data and deep networks, gradient optimization remains more efficient. Users should choose the optimization strategy that matches their problem.


Section 08

Summary and Outlook

metANN gives R users a flexible neural network training tool that combines metaheuristics with gradient optimization. For practitioners it is a plug-and-play optimization choice; for researchers it is a platform for comparing algorithms. Future versions are expected to integrate more advanced optimization techniques and broaden the range of applications. The choice of optimization method should be driven by problem characteristics, not by trends.