Chen Ying, Yu Jiong, Zhao Yutong, Chen Jiaying, Du Xusheng
School of Software, Xinjiang University, Urumqi 830008, China.
College of Information Science and Engineering, Xinjiang University, Urumqi 830046, China.
Entropy (Basel). 2022 Mar 21;24(3):432. doi: 10.3390/e24030432.
In most existing multi-task learning (MTL) models, the information common to multiple tasks is learned by sharing parameters across hidden layers, as in hard sharing, soft sharing, and hierarchical sharing. A promising alternative introduces model pruning into this information learning, as in sparse sharing, which is regarded as outstanding for knowledge transfer. However, these methods perform inefficiently on conflicting tasks: they either learn tasks' private information inadequately or suffer from negative transfer. In this paper, we propose a multi-task learning model, Pruning-Based Feature Sharing (PBFS), that merges a soft parameter sharing structure with model pruning and adds a prunable shared network among the task-specific subnets. In this way, each task can select parameters from the shared subnet according to its own requirements. Experiments are conducted on three public benchmark datasets and one synthetic dataset, and the impact of the subnets' sparsity and of task correlations on model performance is analyzed. Results show that the proposed model's information sharing strategy facilitates transfer learning and outperforms several comparison models.
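Since the abstract gives no implementation details, the following is a minimal PyTorch sketch of the architecture as described: a prunable shared subnet whose weights each task selects through its own binary mask, combined with task-specific (soft-shared) subnets and per-task output heads. All module names, layer sizes, and the random masking used here are hypothetical illustrations, not the authors' code; in practice the per-task masks would come from a pruning criterion (e.g., weight magnitude) rather than random selection.

```python
import torch
import torch.nn as nn

class PBFSSketch(nn.Module):
    """Illustrative sketch of the PBFS idea from the abstract:
    each task applies its own binary mask to a prunable shared subnet
    and combines the result with a task-specific (private) subnet.
    Hypothetical structure; not the authors' implementation."""

    def __init__(self, in_dim, hidden_dim, out_dims, keep_ratio=0.5):
        super().__init__()
        # Prunable shared subnet, used by all tasks through masks.
        self.shared = nn.Linear(in_dim, hidden_dim)
        # One private subnet and one output head per task.
        self.private = nn.ModuleList(
            nn.Linear(in_dim, hidden_dim) for _ in out_dims
        )
        self.heads = nn.ModuleList(
            nn.Linear(hidden_dim, d) for d in out_dims
        )
        # Per-task binary masks over the shared weights. Random masks
        # stand in for a real pruning criterion, purely for illustration.
        self.masks = [
            (torch.rand_like(self.shared.weight) < keep_ratio).float()
            for _ in out_dims
        ]

    def forward(self, x, task_id):
        # Each task uses only its selected subset of the shared weights.
        mask = self.masks[task_id]
        shared_out = nn.functional.linear(
            x, self.shared.weight * mask, self.shared.bias
        )
        private_out = self.private[task_id](x)
        h = torch.relu(shared_out + private_out)
        return self.heads[task_id](h)
```

Under this reading of the abstract, the `keep_ratio` controls the shared subnet's sparsity (one of the factors whose effect the paper analyzes), and differing masks across tasks let conflicting tasks avoid sharing the same shared-network parameters.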