
Learning Transferable Parameters for Unsupervised Domain Adaptation.

Publication Information

IEEE Trans Image Process. 2022;31:6424-6439. doi: 10.1109/TIP.2022.3184848. Epub 2022 Oct 21.

Abstract

Unsupervised domain adaptation (UDA) enables a learning machine to adapt from a labeled source domain to an unlabeled target domain under distribution shift. Thanks to the strong representation ability of deep neural networks, recent remarkable achievements in UDA rely on learning domain-invariant features. Intuitively, the hope is that a good feature representation, together with the hypothesis learned from the source domain, can generalize well to the target domain. However, the learning processes of domain-invariant features and source hypotheses inevitably involve domain-specific information that degrades the generalizability of UDA models on the target domain. The lottery ticket hypothesis shows that only a subset of parameters is essential for generalization. Motivated by this, we find in this paper that only a subset of parameters is essential for learning domain-invariant information. Such parameters are termed transferable parameters and can generalize well in UDA. In contrast, the remaining parameters tend to fit domain-specific details and often cause generalization to fail; these are termed untransferable parameters. Driven by this insight, we propose Transferable Parameter Learning (TransPar) to reduce the side effect of domain-specific information in the learning process and thus enhance the memorization of domain-invariant information. Specifically, according to the degree of distribution discrepancy, we divide all parameters into transferable and untransferable ones in each training iteration. We then apply separate update rules to the two types of parameters. Extensive experiments on image classification and regression tasks (keypoint detection) show that TransPar outperforms prior art by non-trivial margins. Moreover, experiments demonstrate that TransPar can be integrated into the most popular deep UDA networks and easily extended to handle any data distribution shift scenario.
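To make the two update rules concrete, below is a minimal PyTorch-style sketch of one training iteration. It is an illustration under assumptions, not the authors' implementation: the function name transpar_step, the per-weight importance score |w · ∂L/∂w|, and the keep_ratio and suppress knobs are hypothetical stand-ins for the paper's actual discrepancy-based division criterion.

```python
import torch


def transpar_step(model, loss, optimizer, keep_ratio=0.5, suppress=0.0):
    """One optimization step with a TransPar-style parameter split.

    Hypothetical criterion: within each weight tensor, the entries with
    the largest first-order importance |w * dL/dw| under the adaptation
    loss are treated as transferable and updated normally; the rest are
    treated as untransferable and have their gradients scaled by
    `suppress` (0.0 freezes them for this iteration).
    """
    optimizer.zero_grad()
    loss.backward()
    for p in model.parameters():
        if p.grad is None:
            continue
        score = (p.detach() * p.grad).abs()          # per-weight importance
        k = max(1, int(keep_ratio * score.numel()))  # entries kept per tensor
        # Threshold at the k-th largest score, i.e. the
        # (numel - k + 1)-th smallest value.
        thresh = score.flatten().kthvalue(score.numel() - k + 1).values
        mask = (score >= thresh).to(p.grad.dtype)    # 1 = transferable
        p.grad.mul_(mask + suppress * (1.0 - mask))  # suppress the rest
    optimizer.step()
```

A full training loop would combine a source task loss with a domain discrepancy term (e.g., an adversarial domain loss) before calling this step, and the paper determines the division from the measured distribution discrepancy per iteration rather than from a fixed keep_ratio.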

