School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin, P.R. China.
Department of Computer Science and Engineering, Texas A&M University, College Station, TX, United States of America.
PLoS One. 2021 May 6;16(5):e0251204. doi: 10.1371/journal.pone.0251204. eCollection 2021.
Political optimizer (PO) is a recent state-of-the-art meta-heuristic optimization technique for global optimization problems as well as real-world engineering optimization, which mimics the multi-staged process of politics in human society. However, owing to the greedy strategy used during the election phase and an inappropriate balance between global exploration and local exploitation during the party switching phase, it suffers from stagnation in local optima and low convergence accuracy. To overcome these drawbacks, a sequence of novel PO variants was proposed by integrating PO with Quadratic Interpolation, Advanced Quadratic Interpolation, Cubic Interpolation, Lagrange Interpolation, Newton Interpolation, and Refraction Learning (RL). The main contributions of this work are as follows. (1) An interpolation strategy was adopted to help the current global optimum escape local optima. (2) RL was integrated into PO to improve population diversity. (3) A logistic model was introduced to better balance exploration and exploitation during the party switching phase. To the best of our knowledge, this is the first time PO has been combined with the interpolation strategy and RL. The performance of the best PO variant was evaluated on 19 widely used benchmark functions and the 30 test functions of the IEEE CEC 2014 suite. Experimental results revealed the superior exploration capability of the proposed algorithm.
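As an illustration of the interpolation strategy mentioned in contribution (1), the following is a minimal Python sketch (not the authors' implementation) of how a three-point quadratic interpolation can be applied component-wise to nudge the current global best out of a local optimum; the choice of the two companion solutions and the greedy acceptance rule are assumptions made for illustration.

import numpy as np

def quadratic_interpolation(x1, x2, x3, f1, f2, f3, eps=1e-12):
    # Vertex of the parabola fitted through (x1, f1), (x2, f2), (x3, f3),
    # applied component-wise; eps guards against a zero denominator.
    num = (x2**2 - x3**2) * f1 + (x3**2 - x1**2) * f2 + (x1**2 - x2**2) * f3
    den = (x2 - x3) * f1 + (x3 - x1) * f2 + (x1 - x2) * f3
    return num / (2.0 * den + eps)

def refine_leader(leader, pop, fitness, objective, rng=None):
    # Hypothetical helper: interpolate the global best with two randomly chosen
    # population members and keep the interpolated point only if it improves
    # the objective (minimization assumed).
    rng = rng or np.random.default_rng()
    i, j = rng.choice(len(pop), size=2, replace=False)
    candidate = quadratic_interpolation(leader, pop[i], pop[j],
                                        objective(leader), fitness[i], fitness[j])
    return candidate if objective(candidate) < objective(leader) else leader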
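For contributions (2) and (3), a comparably hedged sketch is given below, under the assumption that RL follows the common refraction-based generalization of opposition-based learning and that the logistic model simply shifts the search from exploration to exploitation over the iterations; the parameter values are illustrative, not taken from the paper.

import numpy as np

def refraction_learning(x, lb, ub, k=2.0):
    # Refraction-based opposite of x within [lb, ub]; with k = 1 this reduces
    # to classical opposition-based learning, lb + ub - x. The refraction
    # index k used in the paper is an assumption here.
    return (lb + ub) / 2.0 + (lb + ub) / (2.0 * k) - x / k

def logistic_switch_rate(t, T, lam_max=1.0, steepness=10.0):
    # Illustrative logistic schedule: a high party-switching rate early in the
    # run (global exploration) decaying smoothly towards zero (local
    # exploitation) as iteration t approaches the budget T.
    return lam_max / (1.0 + np.exp(steepness * (t / T - 0.5)))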