
Attention Guided Multiple Source and Target Domain Adaptation

Authors

Wang Yuxi, Zhang Zhaoxiang, Hao Wangli, Song Chunfeng

Publication

IEEE Trans Image Process. 2021;30:892-906. doi: 10.1109/TIP.2020.3031161. Epub 2020 Dec 8.

Abstract

Domain adaptation aims to alleviate the distribution discrepancy between source and target domains. Most conventional methods focus on adapting from one or multiple source domains to a single target domain, neglecting the multi-target setting. We argue that different target domains also carry complementary information, which is important for improving performance. In this paper, we propose an Attention-guided Multiple source-and-target Domain Adaptation (AMDA) method to capture context-dependency information on transferable regions among multiple source and target domains. The main contributions of this paper are as follows: (1) We use multiple adversarial strategies to harvest sufficient information from multiple source and target domains, which improves the generalization and robustness of the learned feature pools. (2) We propose intra-domain and inter-domain attention modules to explore transferable context information. The proposed attention modules learn domain-invariant representations and reduce negative transfer by focusing on transferable knowledge. Extensive experiments validate the effectiveness of our method, which achieves state-of-the-art performance on several unsupervised domain adaptation datasets.
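To make the intra-domain and inter-domain attention idea concrete, the following is a minimal NumPy sketch, not the paper's actual module: the feature shapes, the dot-product affinity, and the function names (`intra_domain_attention`, `inter_domain_attention`) are illustrative assumptions. Intra-domain attention re-weights each spatial position of one domain's feature map by its affinity to all other positions; inter-domain attention lets target positions attend to source positions, which is one simple way to emphasize transferable context.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def intra_domain_attention(feat):
    """Self-attention over spatial positions of one domain's feature map.

    feat: (N, C) array of N spatial positions with C channels.
    Returns re-weighted features of the same shape.
    """
    affinity = feat @ feat.T              # (N, N) pairwise affinities
    weights = softmax(affinity, axis=-1)  # each row sums to 1
    # Each position becomes a weighted sum of all positions,
    # aggregating context from the whole map.
    return weights @ feat

def inter_domain_attention(src_feat, tgt_feat):
    """Cross-attention: target positions attend to source positions.

    src_feat: (Ns, C), tgt_feat: (Nt, C). Returns (Nt, C).
    """
    affinity = tgt_feat @ src_feat.T      # (Nt, Ns)
    weights = softmax(affinity, axis=-1)
    return weights @ src_feat

# Toy usage with random "feature maps" flattened to (positions, channels).
rng = np.random.default_rng(0)
src = rng.standard_normal((6, 4))
tgt = rng.standard_normal((5, 4))
intra_out = intra_domain_attention(src)       # shape (6, 4)
inter_out = inter_domain_attention(src, tgt)  # shape (5, 4)
```

In the paper the attention is learned jointly with adversarial alignment objectives; this sketch only shows the attention re-weighting step itself.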

