Ma Xiaoliang, Chen Qunjian, Yu Yanan, Sun Yiwen, Ma Lijia, Zhu Zexuan
College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China.
Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ), Shenzhen University, Shenzhen, China.
Front Neurosci. 2020 Jan 14;13:1408. doi: 10.3389/fnins.2019.01408. eCollection 2019.
Different from conventional single-task optimization, the recently proposed multitasking optimization (MTO) deals simultaneously with multiple optimization tasks, each possibly having a different type of decision variables. MTO exploits the underlying similarity and complementarity among the component tasks to improve the optimization process. The well-known multifactorial evolutionary algorithm (MFEA) has been successfully applied to MTO problems based on transfer learning. However, it uses a simple and random inter-task transfer learning strategy, which results in slow convergence. To deal with this issue, this paper presents a two-level transfer learning (TLTL) algorithm, in which the upper level implements inter-task transfer learning via chromosome crossover and elite individual learning, and the lower level introduces intra-task transfer learning based on the information transfer of decision variables for across-dimension optimization. The proposed algorithm fully exploits the correlation and similarity among the component tasks to improve the efficiency and effectiveness of MTO. Experimental studies demonstrate that the proposed algorithm has an outstanding global search ability and a fast convergence rate.
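To make the two-level idea concrete, the following is a minimal, illustrative sketch only, not the authors' implementation: the abstract does not give operator details, so all names (`sbx_crossover`, `upper_level_transfer`, `lower_level_transfer`) and the specific update rules are assumptions chosen to mirror the description of inter-task crossover, elite individual learning, and intra-task transfer of decision variables across dimensionalities.

```python
# Illustrative sketch of the two-level transfer idea (assumed operators, not the
# paper's exact method). Solutions are real-valued numpy vectors; tasks may have
# different dimensionalities.
import numpy as np

rng = np.random.default_rng(0)

def sbx_crossover(p1, p2, eta=2.0):
    """Simulated binary crossover applied on the shared (shorter) dimensionality."""
    d = min(p1.size, p2.size)
    u = rng.random(d)
    beta = np.where(u <= 0.5,
                    (2 * u) ** (1 / (eta + 1)),
                    (1 / (2 * (1 - u))) ** (1 / (eta + 1)))
    child = p1.copy()
    child[:d] = 0.5 * ((1 + beta) * p1[:d] + (1 - beta) * p2[:d])
    return child

def upper_level_transfer(pop_a, pop_b, elite_b):
    """Inter-task transfer (upper level): cross task-A individuals with randomly
    chosen task-B individuals, then pull them toward task B's elite individual."""
    transferred = []
    for x in pop_a:
        partner = pop_b[rng.integers(len(pop_b))]
        child = sbx_crossover(x, partner)
        d = min(child.size, elite_b.size)
        child[:d] += 0.5 * rng.random() * (elite_b[:d] - child[:d])
        transferred.append(child)
    return transferred

def lower_level_transfer(x, dim_target):
    """Intra-task transfer (lower level): reuse existing decision-variable values
    to build a candidate of a different dimensionality within the same task."""
    y = np.empty(dim_target)
    d = min(x.size, dim_target)
    y[:d] = x[:d]
    if dim_target > d:  # fill the extra dimensions by resampling known variables
        y[d:] = x[rng.integers(0, x.size, dim_target - d)]
    return y
```

In this sketch the upper level exchanges information between tasks through crossover plus learning from the other task's best individual, while the lower level recombines a solution's own decision variables to search across dimensionalities; the actual TLTL operators may differ.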