Zhang Yanhua, Adegboye Oluwatayomi Rereloluwa, Feda Afi Kekeli, Agyekum Ephraim Bonah, Kumar Pankaj
Department of Physics and Electronic Engineering, Yuncheng University, Yuncheng City, Shanxi Province, China.
University of Mediterranean Karpasia, Mersin-10, Northern Cyprus, Turkey.
Sci Rep. 2025 May 6;15(1):15779. doi: 10.1038/s41598-025-00076-5.
The Dynamic Gold Rush Optimizer (DGRO) is presented as an advanced variant of the original Gold Rush Optimizer (GRO), addressing its inherent limitations in exploration and exploitation. While GRO has demonstrated efficacy in solving optimization problems, its susceptibility to premature convergence and suboptimal solutions remains a critical challenge. To overcome these limitations, DGRO introduces two novel mechanisms: the Salp Navigation Mechanism (SNM) and the Worker Adaptation Mechanism (WAM). The SNM enhances both exploration and exploitation by dynamically guiding the population through a stochastic strategy that ensures effective navigation of the solution space. This mechanism also facilitates a smooth transition between exploration and exploitation, enabling the algorithm to maintain diversity during early iterations and refine solutions in later stages. Complementing this, the WAM strengthens the exploration phase by promoting localized interactions among individuals within the population, fostering adaptive learning of promising search regions. Together, these mechanisms significantly improve DGRO's ability to converge toward global optima. A comprehensive experimental evaluation was conducted using benchmark functions from the Congress on Evolutionary Computation CEC2013 and CEC2020 test suites in 30- and 50-dimensional search spaces, alongside seven complex engineering optimization problems. Statistical analyses, including the Wilcoxon Rank-Sum Test (WRST) and the Friedman Rank Test (FRT), validate DGRO's superior performance, demonstrating significant advancements in optimization capability and stability. These findings underscore the effectiveness of DGRO as a competitive and robust optimization algorithm.
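To make the described population-update structure concrete, the sketch below gives one plausible reading of the two mechanisms: the SNM is modeled here as a salp-swarm-style coefficient that decays over the iterations, shifting the search from broad exploration to exploitation around the current best solution, and the WAM as a localized learn-from-a-better-worker step. The function dgro_sketch, the coefficient c1, and both update rules are illustrative assumptions made for this abstract, not the published DGRO equations.

    import numpy as np

    def dgro_sketch(objective, dim, bounds, pop_size=30, max_iter=500, seed=None):
        """Illustrative population-update loop in the spirit of the DGRO description.

        NOTE: the SNM and WAM rules below are hypothetical stand-ins (SNM modeled
        on a salp-swarm-style decaying coefficient, WAM as a move toward a randomly
        chosen better individual); they are NOT the published DGRO equations.
        """
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        pop = rng.uniform(lo, hi, size=(pop_size, dim))
        fitness = np.apply_along_axis(objective, 1, pop)
        best_idx = fitness.argmin()
        best, best_fit = pop[best_idx].copy(), fitness[best_idx]

        for t in range(max_iter):
            # SNM (assumed form): coefficient decays from ~2 toward 0, so early
            # iterations take large steps (diversity) and late ones refine locally.
            c1 = 2.0 * np.exp(-(4.0 * t / max_iter) ** 2)
            for i in range(pop_size):
                c2 = rng.random(dim)
                c3 = rng.random(dim)
                step = c1 * (c2 * (hi - lo) + lo)
                candidate = np.where(c3 < 0.5, best + step, best - step)

                # WAM (assumed form): localized interaction -- pull toward a
                # randomly chosen "worker" that currently holds a better solution.
                j = rng.integers(pop_size)
                if fitness[j] < fitness[i]:
                    candidate += rng.random(dim) * (pop[j] - pop[i])

                candidate = np.clip(candidate, lo, hi)
                cand_fit = objective(candidate)
                if cand_fit < fitness[i]:  # greedy replacement keeps improvements
                    pop[i], fitness[i] = candidate, cand_fit
                    if cand_fit < best_fit:
                        best, best_fit = candidate.copy(), cand_fit
        return best, best_fit

    # Example: 30-dimensional sphere function, a typical CEC-style test setting.
    if __name__ == "__main__":
        sphere = lambda x: float(np.sum(x ** 2))
        x_best, f_best = dgro_sketch(sphere, dim=30, bounds=(-100.0, 100.0), seed=1)
        print(f"best fitness: {f_best:.3e}")

The decaying coefficient and greedy replacement mirror the abstract's claim that diversity is maintained early and solutions are refined in later stages; the actual DGRO operators and parameter settings should be taken from the paper itself.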