IEEE Trans Cybern. 2021 Mar;51(3):1175-1188. doi: 10.1109/TCYB.2020.2977956. Epub 2021 Feb 17.
Large-scale optimization has become a significant and challenging research topic in the evolutionary computation (EC) community. Although many improved EC algorithms have been proposed for large-scale optimization, slow convergence in the huge search space and entrapment in local optima among massive suboptima remain challenging. To address these two issues, this article proposes an adaptive granularity learning distributed particle swarm optimization (AGLDPSO) that leverages machine-learning techniques, including clustering analysis based on locality-sensitive hashing (LSH) and adaptive granularity control based on logistic regression (LR). In AGLDPSO, a master-slave multisubpopulation distributed model is adopted, in which the entire population is divided into multiple subpopulations that are co-evolved. Compared with other large-scale optimization algorithms that rely on single-population evolution or a centralized mechanism, the multisubpopulation distributed co-evolution mechanism fully exchanges evolutionary information among different subpopulations to further enhance population diversity. Furthermore, we propose an adaptive granularity learning strategy (AGLS) based on LSH and LR. AGLS helps determine an appropriate subpopulation size to control the learning granularity of the distributed subpopulations in different evolutionary states, balancing the exploration ability needed to escape massive suboptima against the exploitation ability needed to converge in the huge search space. The experimental results show that AGLDPSO performs better than, or at least comparably with, other state-of-the-art large-scale optimization algorithms, including the winner of the competition on large-scale optimization, on all 35 benchmark functions from the IEEE Congress on Evolutionary Computation (IEEE CEC2010) and IEEE CEC2013 large-scale optimization test suites.
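To make the overall architecture concrete, the following is a minimal sketch, not the authors' AGLDPSO implementation: particles are bucketed into subpopulations by a simple random-projection LSH code (standing in for the paper's LSH-based clustering analysis), each subpopulation runs standard PSO velocity/position updates around its own best particle, and a master step tracks the global best across subpopulations. The LR-based adaptive granularity control and the actual CEC2010/CEC2013 benchmarks are omitted; all function names, parameter values, and the toy sphere objective are illustrative assumptions.

```python
# Illustrative multisubpopulation PSO sketch with LSH-based grouping.
# NOT the authors' AGLDPSO: the LR-based granularity adaptation is omitted
# and the objective is a toy sphere function, not a CEC benchmark.
import numpy as np

def sphere(x):
    # Toy objective; the paper evaluates on CEC2010/CEC2013 large-scale suites.
    return np.sum(x ** 2)

def lsh_codes(X, n_planes, rng):
    """Random-projection LSH: hash each particle to a binary bucket code."""
    planes = rng.standard_normal((X.shape[1], n_planes))
    bits = (X @ planes > 0).astype(int)
    return np.array([int("".join(map(str, b)), 2) for b in bits])

def distributed_pso_sketch(f, dim=50, pop=100, iters=200, seed=1):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-100, 100, (pop, dim))
    V = np.zeros((pop, dim))
    pbest, pbest_val = X.copy(), np.array([f(x) for x in X])

    w, c1, c2 = 0.7, 1.5, 1.5          # assumed PSO coefficients
    for _ in range(iters):
        # LSH buckets define the subpopulations; in AGLDPSO the subpopulation
        # size (learning granularity) is adapted by logistic regression.
        codes = lsh_codes(pbest, n_planes=3, rng=rng)
        for code in np.unique(codes):
            idx = np.where(codes == code)[0]
            lbest = pbest[idx[np.argmin(pbest_val[idx])]]   # subpopulation best
            r1, r2 = rng.random((2, len(idx), dim))
            V[idx] = (w * V[idx]
                      + c1 * r1 * (pbest[idx] - X[idx])
                      + c2 * r2 * (lbest - X[idx]))
            X[idx] = np.clip(X[idx] + V[idx], -100, 100)

        vals = np.array([f(x) for x in X])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = X[improved], vals[improved]

    # Master step: collect the global best across all subpopulations.
    best_idx = np.argmin(pbest_val)
    return pbest[best_idx], pbest_val[best_idx]

if __name__ == "__main__":
    best, best_val = distributed_pso_sketch(sphere)
    print(f"best value after sketch run: {best_val:.4e}")
```

In this sketch the re-hashing of personal bests at every iteration loosely mirrors the idea that subpopulation membership changes with the evolutionary state; the full method additionally exchanges information between subpopulations through the master and adapts the subpopulation size online.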