
Adaptive dynamic self-learning grey wolf optimization algorithm for solving global optimization problems and engineering problems.

Author Information

Zhang Yijie, Cai Yuhang

Affiliation

School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China.

Publication Information

Math Biosci Eng. 2024 Feb 21;21(3):3910-3943. doi: 10.3934/mbe.2024174.

Abstract

The grey wolf optimization algorithm (GWO) is a recent metaheuristic algorithm. GWO has the advantages of a simple structure, few parameters to adjust, and high efficiency, and has been applied to various optimization problems. However, the original GWO search process is guided entirely by the best three wolves, resulting in low population diversity, susceptibility to local optima, a slow convergence rate, and an imbalance between exploitation and exploration. To address these shortcomings, this paper proposes an adaptive dynamic self-learning grey wolf optimization algorithm (ASGWO). First, the convergence factor was segmented and made nonlinear to balance the algorithm's global and local search and improve the convergence rate. Second, the wolves in the original GWO approach the leaders in a straight line, which is overly simple and discards information along the path. Therefore, a dynamic logarithmic spiral that nonlinearly decreases with the iteration count was introduced to expand the search range in the early stage and enhance local exploitation in the later stage. Third, the fixed step size in the original GWO can cause the algorithm to oscillate and become unable to escape local optima. A dynamic self-learning step size was designed that, by reasonably learning from the current evolution success rate and iteration count, helps the algorithm escape local optima and prevents oscillation. Finally, the original GWO has low population diversity, which makes the algorithm highly susceptible to becoming trapped in local optima. A novel position update strategy was therefore proposed that uses the global optimum and randomly generated positions as learning samples and dynamically controls their influence, increasing population diversity and avoiding premature convergence.
In comparisons on 23 classical test functions with traditional algorithms such as GWO, PSO, and WOA, and with the newer variant algorithms EOGWO and SOGWO, ASGWO effectively improves convergence accuracy and convergence speed and shows a strong ability to escape local optima. In addition, ASGWO also performs well on engineering problems (the gear train problem, pressure vessel problem, and car crashworthiness problem) and on feature selection.
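The abstract names the mechanisms but not their formulas. As a rough, runnable sketch of the baseline the paper modifies, the Python below reproduces the standard GWO update (each wolf moves toward the alpha, beta, and delta wolves) and plugs in a hypothetical segmented nonlinear convergence factor of the kind the first modification describes. The specific schedule in `nonlinear_a`, and all function and parameter names, are illustrative assumptions, not the paper's actual ASGWO.

```python
import numpy as np

def nonlinear_a(t, T):
    """Illustrative segmented nonlinear convergence factor.

    The abstract only states that the factor is segmented and nonlinear;
    this two-piece schedule (slow quadratic decay early for exploration,
    fast quadratic decay late for exploitation) is an assumption standing
    in for the paper's undisclosed formula. Classic GWO uses the linear
    schedule a = 2 - 2 * t / T.
    """
    if t < T / 2:
        return 2.0 - 2.0 * (t / T) ** 2        # decays slowly: 2.0 -> 1.5
    return 1.5 * (2.0 * (1.0 - t / T)) ** 2    # decays quickly: 1.5 -> 0.0

def gwo(f, dim, lb, ub, n_wolves=20, iters=200, seed=0):
    """Standard GWO position update driven by the alpha, beta, and delta
    wolves, using nonlinear_a() in place of the usual linear schedule."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        fitness = np.apply_along_axis(f, 1, X)
        order = np.argsort(fitness)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = nonlinear_a(t, iters)
        new_X = np.empty_like(X)
        for i in range(n_wolves):
            candidates = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2.0 * a * r1 - a           # |A| shrinks as a decays
                C = 2.0 * r2
                D = np.abs(C * leader - X[i])  # distance to this leader
                candidates.append(leader - A * D)
            # Each wolf moves to the mean of the three leader-guided steps.
            new_X[i] = np.clip(np.mean(candidates, axis=0), lb, ub)
        X = new_X
    fitness = np.apply_along_axis(f, 1, X)
    best = X[np.argmin(fitness)]
    return best, float(f(best))

# Sphere function as a simple convex benchmark.
sphere = lambda x: float(np.sum(x ** 2))
best, val = gwo(sphere, dim=5, lb=-10.0, ub=10.0)
```

Because the whole pack is steered by only three leaders, diversity collapses as `a` shrinks, which is exactly the weakness the paper's self-learning step size and sample-based position update are designed to counteract.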
