Wang Xia, Zhao Wei, Tang Jia-Ning, Dai Zhong-Bin, Feng Ya-Ning
School of Electrical and Information Technology, Yunnan Minzu University, Kunming, 650504, China.
Yunnan Key Laboratory of Unmanned Autonomous System, Yunnan Minzu University, Kunming, 650504, China.
Sci Rep. 2025 Mar 18;15(1):9267. doi: 10.1038/s41598-025-91245-z.
Large-scale sparse multi-objective optimization problems are prevalent in numerous real-world scenarios, such as neural network training, sparse regression, pattern mining, and critical node detection, where the Pareto optimal solutions exhibit sparse characteristics. Ordinary large-scale multi-objective optimization algorithms apply undifferentiated update operations to all decision variables, which reduces search efficiency, so the Pareto solutions they obtain fail to meet the sparsity requirements. SparseEA can generate sparse solutions by computing a score for each decision variable, and these scores guide crossover and mutation in the subsequent evolutionary process. However, the scores remain unchanged throughout the iterations, which restricts the algorithm's sparse optimization ability. To address this problem, this paper proposes an evolutionary algorithm with an adaptive genetic operator and a dynamic scoring mechanism for large-scale sparse many-objective optimization (SparseEA-AGDS). Within the evolutionary algorithm for large-scale sparse multi-objective optimization (SparseEA) framework, the proposed adaptive genetic operator and dynamic scoring mechanism adjust the probabilities of crossover and mutation according to changes in individuals' non-dominated front ranks, while simultaneously updating the scores of the decision variables so that superior individuals gain additional genetic opportunities. Moreover, to strengthen the algorithm's capability on many-objective problems, a reference-point-based environmental selection strategy is incorporated. Comparative experimental results demonstrate that SparseEA-AGDS outperforms five other algorithms in terms of convergence and diversity on the many-objective SMOP benchmark suite and also yields superior sparse Pareto optimal solutions.
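The two mechanisms described above can be illustrated with a minimal sketch. This is not the paper's implementation: the rate formulas, the data layout (a binary `mask` marking non-zero decision variables and a `rank` giving the non-dominated front, front 0 being best), and the reward rule are all illustrative assumptions; they only convey the idea that worse-ranked individuals receive stronger mutation and that variables active in front-0 individuals accumulate score, and thus genetic opportunity, over the iterations.

```python
def adaptive_rates(rank, max_rank, base_cx=0.9, base_mut=0.1):
    """Hypothetical adaptive genetic operator: individuals on better
    (lower) non-dominated fronts keep rates near the base values,
    while worse fronts get less crossover and more mutation."""
    frac = rank / max_rank if max_rank else 0.0
    cx = base_cx * (1.0 - 0.5 * frac)   # crossover probability shrinks with rank
    mut = base_mut + 0.4 * frac         # mutation probability grows with rank
    return cx, mut

def update_scores(scores, population):
    """Hypothetical dynamic scoring: each generation, reward decision
    variables that are non-zero (mask bit set) in front-0 individuals,
    so successful sparse patterns gain extra genetic opportunities."""
    for ind in population:
        if ind["rank"] == 0:
            for j, bit in enumerate(ind["mask"]):
                if bit:
                    scores[j] += 1
    return scores
```

Because the scores are recomputed every generation rather than fixed once at initialization, the variable-selection bias can track the current population, which is the contrast with the original SparseEA that the abstract draws.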