Xia Yiqiang, Ji Yanzhe
College of Science, Liaoning Technical University, Fuxin, 123000, China.
Institute for Optimization and Decision Analytics, Liaoning Technical University, Fuxin, 123000, China.
Sci Rep. 2025 Jul 1;15(1):21692. doi: 10.1038/s41598-025-01678-9.
Over the past few years, numerous swarm intelligence-based metaheuristic algorithms have been introduced and extensively applied. Although these algorithms draw on biological behaviors, their similar heuristic paradigms and modular designs lead to an unbalanced trade-off between exploration and exploitation in complex optimization problems. Metaheuristic algorithms that combine mathematical properties with stochastic search processes can help break through the traditional evolutionary paradigm and enhance individual optimization. In pursuit of this goal, this study introduces an innovative mathematics-based metaheuristic algorithm, called the Adam Gradient Descent Optimizer (AGDO), designed to address continuous optimization and engineering challenges. AGDO is inspired by the Adam optimizer and conducts the entire search process using three rules: progressive gradient momentum integration, a dynamic gradient interaction system, and a system optimization operator. The progressive gradient momentum integration and dynamic gradient interaction system balance exploration and exploitation, while the system optimization operator refines exploitation. AGDO's performance, alongside several well-known and recently introduced metaheuristics, is assessed on the CEC2017 benchmarks across various dimensions and six practical engineering challenges, and the Wilcoxon rank-sum test confirms its efficacy. The experimental findings indicate that AGDO performs strongly across four dimensions (10, 30, 50, and 100) when compared with 19 other algorithms, achieving the highest Wilcoxon rank-sum test scores in three of these dimensions. AGDO is also compared with six state-of-the-art (SOTA) algorithms; the findings show that the algorithm maintains an excellent equilibrium between exploration and exploitation, converges rapidly, and successfully evades local optima, highlighting superior optimization performance.
Moreover, AGDO demonstrates significant effectiveness in addressing intricate real-life challenges. Notably, AGDO shows particular strength on the Distributed Permutation Flow Shop Scheduling Problem (DPFSP). Source code for AGDO is publicly available at https://www.mathworks.com/matlabcentral/fileexchange/180348-agdo .
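For readers unfamiliar with AGDO's inspiration, the sketch below shows the standard Adam update rule (Kingma and Ba's optimizer) that the abstract names as the algorithm's starting point. This is a minimal illustrative implementation of plain Adam on a toy quadratic, not the paper's AGDO operators; the function name `adam_step` and all hyperparameter values are illustrative defaults, not taken from the paper.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.05,
              beta1=0.9, beta2=0.999, eps=1e-8):
    """One standard Adam update: exponentially weighted first- (m) and
    second-moment (v) gradient estimates with bias correction, followed
    by a scaled gradient step. Illustrative sketch only."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment (momentum) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(x) = x^2 starting from x = 5.
theta = np.array([5.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 2001):
    grad = 2 * theta                          # gradient of x^2
    theta, m, v = adam_step(theta, grad, m, v, t)
print(theta)  # close to the minimum at 0
```

The momentum term `m` plays the role the abstract calls "progressive gradient momentum integration," while the adaptive scaling by `sqrt(v_hat)` is the kind of per-dimension step control that AGDO recasts into a population-based stochastic search.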