Yang Qiang, Chen Wei-Neng, Gu Tianlong, Jin Hu, Mao Wentao, Zhang Jun
IEEE Trans Cybern. 2022 Mar;52(3):1960-1976. doi: 10.1109/TCYB.2020.3034427. Epub 2022 Mar 11.
High-dimensional problems are ubiquitous in many fields, yet remain challenging to solve. To tackle such problems effectively and efficiently, this article proposes a simple yet efficient stochastic dominant learning swarm optimizer. In particular, this optimizer not only properly balances swarm diversity and convergence speed but also consumes as little computing time and space as possible to locate the optima. In this optimizer, a particle is updated only when its two exemplars, randomly selected from the current swarm, are its dominators. In this way, each particle has an implicit probability of entering the next generation directly, making it possible to maintain high swarm diversity. Since each updated particle learns only from its dominators, good convergence is likely to be achieved. To alleviate the sensitivity of this optimizer to the newly introduced parameters, an adaptive parameter adjustment strategy is further designed based on the evolutionary information of particles at the individual level. Finally, extensive experiments on two high-dimensional benchmark sets substantiate that the devised optimizer achieves competitive or even better performance in terms of solution quality, convergence speed, scalability, and computational cost, compared with several state-of-the-art methods. In particular, the experimental results show that the proposed optimizer performs excellently on partially separable problems, especially partially separable multimodal problems, which are very common in real-world applications. In addition, the application to feature selection problems further demonstrates the effectiveness of this optimizer in tackling real-world problems.
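The dominant-learning update rule described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' exact formulation: the velocity-update form, the `phi` learning weight, and the function names are assumptions made for illustration, and a minimization objective is assumed.

```python
import random

def sdlso_step(swarm, fitness, phi=0.1):
    """One generation of a stochastic dominant-learning update (sketch).

    swarm: list of (position, velocity) pairs (each a list of floats).
    fitness: list of objective values, lower is better (minimization).
    phi: illustrative learning weight; the paper's adaptive parameter
    adjustment strategy is not reproduced here.
    """
    dim = len(swarm[0][0])
    new_swarm = []
    for i, (x, v) in enumerate(swarm):
        # Randomly pick two distinct exemplars from the current swarm.
        a, b = random.sample([j for j in range(len(swarm)) if j != i], 2)
        # Update particle i only when BOTH exemplars dominate it;
        # otherwise it enters the next generation unchanged, which
        # implicitly preserves swarm diversity.
        if fitness[a] < fitness[i] and fitness[b] < fitness[i]:
            # Let the fitter exemplar lead and the other assist,
            # so the particle learns only from its dominators.
            lead, aid = (a, b) if fitness[a] < fitness[b] else (b, a)
            r1, r2, r3 = (random.random() for _ in range(3))
            new_v = [r1 * v[d]
                     + r2 * (swarm[lead][0][d] - x[d])
                     + phi * r3 * (swarm[aid][0][d] - x[d])
                     for d in range(dim)]
            new_x = [x[d] + new_v[d] for d in range(dim)]
            new_swarm.append((new_x, new_v))
        else:
            new_swarm.append((x, v))
    return new_swarm
```

Note that the best particle in the swarm can never find two dominators, so it always passes to the next generation unchanged; each update touches only two randomly sampled exemplars, which keeps the per-generation time and space cost low.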