Li Keqiuyin, Lu Jie, Zuo Hua, Zhang Guangquan
IEEE Trans Neural Netw Learn Syst. 2022 Oct;33(10):5293-5307. doi: 10.1109/TNNLS.2021.3069982. Epub 2022 Oct 5.
Transfer learning has become an attractive technology for tackling a task in a target domain by leveraging knowledge previously acquired from a similar domain (the source domain). Many existing transfer learning methods focus on learning one discriminator with a single source domain. Sometimes, knowledge from a single source domain might not be sufficient to predict the target task. Thus, multiple source domains carrying richer transferable information are considered to complete the target task. Although some previous studies deal with multi-source domain adaptation, these methods commonly combine source predictions by simply averaging source performances. However, different source domains contain different transferable information and may therefore contribute differently to the target domain. Hence, the contribution of each source should be taken into account when predicting a target task. In this article, we propose a novel multi-source contribution learning method for domain adaptation (MSCLDA). In the proposed method, the similarities and diversities of domains are learned simultaneously by extracting multi-view features. One view represents the common features (similarities) shared by all domains. The other views represent different characteristics (diversities) of the target domain, each characteristic being expressed by features extracted from one source domain. Multi-level distribution matching is then employed to improve the transferability of the latent features, aiming to reduce the misclassification of boundary samples by maximizing the discrepancy between different classes and minimizing the discrepancy within the same class.
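The class-aware matching idea above can be sketched with a kernel two-sample statistic: minimize the discrepancy between same-class samples across domains while maximizing it between different classes. The sketch below is illustrative only, assuming a squared maximum mean discrepancy (MMD) with an RBF kernel and pseudo labels on the target; the paper's exact losses and feature extractors are not reproduced here.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Pairwise RBF kernel matrix between rows of X and rows of Y."""
    sq = (np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=1.0):
    """Squared maximum mean discrepancy between samples X and Y."""
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())

def class_level_matching_loss(Xs, ys, Xt, yt_pseudo, gamma=1.0):
    """Illustrative class-level matching term: penalize discrepancy
    between the same class across domains (pull together) and reward
    discrepancy between different classes (push apart)."""
    same, diff = 0.0, 0.0
    for c in np.unique(ys):
        Xs_c, Xt_c = Xs[ys == c], Xt[yt_pseudo == c]
        if len(Xt_c) == 0:
            continue
        same += mmd2(Xs_c, Xt_c, gamma)          # same-class alignment
        for c2 in np.unique(ys):
            if c2 != c and np.any(yt_pseudo == c2):
                diff += mmd2(Xs_c, Xt[yt_pseudo == c2], gamma)
    return same - diff  # lower: classes aligned across domains, separated between classes
```

In practice such a term would be added to the classification loss and minimized over the feature extractor, so well-matched classes drive the latent features toward transferability.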
Concurrently, when completing a target task by combining source predictions, instead of averaging the source predictions or weighting the sources by normalized similarities alone, the original weights, learned by normalizing the similarities between the source and target domains, are adjusted using pseudo target labels to increase the disparity of the weight values. This adjustment is expected to improve the performance of the final target predictor when the source predictions differ significantly. Experiments on real-world visual data sets demonstrate the superiority of the proposed method.
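The weighting scheme can be illustrated as follows: normalize per-source similarities into weights, then sharpen them using each source classifier's agreement with the pseudo target labels so that more reliable sources receive disproportionately larger weights. This is a minimal sketch of the idea; the agreement-based adjustment rule below is an illustrative stand-in, not the paper's exact update.

```python
import numpy as np

def source_weights(similarities, source_preds, pseudo_labels):
    """Turn source-target similarities into contribution weights, then
    adjust them by each source's agreement with pseudo target labels
    (illustrative rule) to increase the disparity between weights."""
    sims = np.asarray(similarities, dtype=float)
    w = sims / sims.sum()                      # original normalized weights
    agreement = np.array([np.mean(p == pseudo_labels) for p in source_preds])
    adjusted = w * agreement                   # reward sources that agree
    return adjusted / adjusted.sum()           # renormalize to sum to 1

def combine_predictions(prob_preds, weights):
    """Weighted average of per-source class-probability predictions.
    prob_preds: array of shape (num_sources, num_samples, num_classes)."""
    return np.tensordot(weights, np.asarray(prob_preds), axes=1)
```

With two equally similar sources, one fully agreeing with the pseudo labels and one agreeing on only half, the adjusted weights move from (0.5, 0.5) to (2/3, 1/3), which is exactly the increased disparity the abstract describes.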