College of Computer Science and Technology, Harbin Engineering University, No.145 Nantong Street, Harbin 150001, China.
Sensors (Basel). 2019 Sep 16;19(18):3992. doi: 10.3390/s19183992.
Transfer learning can improve classification performance in a target domain with insufficient training data by exploiting knowledge from source domains related to the target domain. Nowadays, it is common for two or more source domains to be available for knowledge transfer, which can further improve performance on learning tasks in the target domain. However, mismatches between the probability distributions of the source and target domains degrade classification performance in the target domain. Recent studies have shown that deep learning can resist this mismatch by building deep structures that extract more effective features. In this paper, we propose MultiDTNN, a new multi-source deep transfer neural network algorithm based on convolutional neural networks and multi-source transfer learning. In MultiDTNN, joint probability distribution adaptation (JPDA) is used to reduce the mismatch between each source domain and the target domain, enhancing the transferability of source-domain features within the deep neural network. A convolutional neural network is then trained on the dataset of each source domain together with the target domain, yielding a set of classifiers. Finally, a selection strategy picks from this set the classifier with the smallest classification error on the target domain to assemble the MultiDTNN framework. The effectiveness of the proposed MultiDTNN is verified by comparing it with other state-of-the-art deep transfer learning methods on three datasets.
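The final step described above, choosing the classifier with the smallest target-domain error, can be sketched as follows. This is a minimal illustration only: the helper names (`classification_error`, `select_classifier`) and the toy stand-in classifiers are hypothetical, not from the paper, which trains one CNN per source domain with JPDA-based adaptation.

```python
# Hypothetical sketch of a selection strategy over per-source classifiers:
# evaluate each classifier on labeled target-domain data and keep the one
# with the smallest classification error.

def classification_error(classifier, samples, labels):
    """Fraction of target samples the classifier mislabels."""
    wrong = sum(1 for x, y in zip(samples, labels) if classifier(x) != y)
    return wrong / len(samples)

def select_classifier(classifiers, target_samples, target_labels):
    """Return the classifier with the smallest error on the target domain."""
    return min(classifiers,
               key=lambda c: classification_error(c, target_samples, target_labels))

# Toy usage: two stand-in "classifiers" over 1-D inputs, each playing the
# role of a CNN adapted from a different source domain.
clf_a = lambda x: int(x > 0.5)   # stand-in for a CNN adapted from source A
clf_b = lambda x: int(x > 0.9)   # stand-in for a CNN adapted from source B
samples = [0.1, 0.6, 0.8, 0.95]
labels = [0, 1, 1, 1]
best = select_classifier([clf_a, clf_b], samples, labels)  # picks clf_a here
```

In practice the selection would be made on held-out labeled target data, and the chosen per-source classifiers are assembled into the final framework.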