Luo Yong, Wen Yonggang, Liu Tongliang, Tao Dacheng
IEEE Trans Pattern Anal Mach Intell. 2019 Apr;41(4):1013-1026. doi: 10.1109/TPAMI.2018.2824309. Epub 2018 Apr 9.
The goal of transfer learning is to improve the performance of a target learning task by leveraging information from (or transferring knowledge learned in) other related tasks. In this paper, we examine the problem of transfer distance metric learning (DML), which usually aims to mitigate the label-deficiency issue in the target DML task. Most current transfer DML (TDML) methods are not applicable when the data are drawn from heterogeneous domains. Some existing heterogeneous transfer learning (HTL) approaches can learn a target distance metric, usually by transforming the samples of the source and target domains into a common subspace. However, these approaches lack flexibility in real-world applications, and the learned transformations are often restricted to be linear. This motivates us to develop a general, flexible heterogeneous TDML (HTDML) framework. In particular, any (linear or nonlinear) DML algorithm can be employed to learn the source metric beforehand. The pre-learned source metric is then represented as a set of knowledge fragments to help target metric learning. We show how the generalization error in the target domain can be reduced using the proposed transfer strategy, and develop novel algorithms to learn either a linear or a nonlinear target metric. Extensive experiments on various applications demonstrate the effectiveness of the proposed method.
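To make the fragment-transfer idea concrete, here is a minimal linear sketch, not the paper's actual algorithm: all dimensions, the ridge regularizer, and the use of co-occurring unlabeled cross-domain pairs are assumptions made for illustration. A source metric M_s = L_s^T L_s is treated as a set of fragment functions (the rows of L_s), and a target map L_t is fit so that target views reproduce the fragment outputs, inducing a target metric M_t = L_t^T L_t.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy heterogeneous setup (all names/sizes are illustrative assumptions):
# the source domain has d_s-dim features, the target domain d_t-dim features.
d_s, d_t, r, n = 6, 4, 3, 60

# Pretend a source metric was pre-learned beforehand as M_s = L_s^T L_s.
L_s = rng.normal(size=(r, d_s))

# "Knowledge fragments": the r rows of L_s, viewed as functions
# f_j(x) = L_s[j] @ x. They are evaluated on co-occurring unlabeled
# sample pairs (x_s, x_t) assumed to link the two domains.
X_s = rng.normal(size=(n, d_s))
X_t = rng.normal(size=(n, d_t))
F = X_s @ L_s.T                     # fragment outputs on source views, (n, r)

# Fit a linear target map L_t (its rows are target-side fragments) so that
# target views approximate the source fragment outputs; a ridge term keeps
# the least-squares problem well-posed (lambda chosen arbitrarily here).
lam = 1e-2
L_t = np.linalg.solve(X_t.T @ X_t + lam * np.eye(d_t), X_t.T @ F).T  # (r, d_t)

# The induced target Mahalanobis metric:
M_t = L_t.T @ L_t

def target_dist(a, b):
    """Mahalanobis distance between target-domain samples under M_t."""
    diff = a - b
    return float(np.sqrt(diff @ M_t @ diff))
```

In the full framework this closed-form regression would be replaced by an objective that also uses the (scarce) target labels, and the fragment approximators can be nonlinear; the sketch only shows how a pre-learned source metric can shape a target metric without mapping both domains into one common subspace.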