
Center transfer for supervised domain adaptation.

Authors

Huang Xiuyu, Zhou Nan, Huang Jian, Zhang Huaidong, Pedrycz Witold, Choi Kup-Sze

Affiliations

Center for Smart Health, The Hong Kong Polytechnic University, Hong Kong SAR, 999077 China.

Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G 2R3 Canada.

Publication

Appl Intell (Dordr). 2023 Jan 26:1-17. doi: 10.1007/s10489-022-04414-2.

Abstract

Domain adaptation (DA) is a popular strategy for pattern recognition and classification tasks. It leverages a large amount of data from the source domain to help train the model applied in the target domain. Supervised domain adaptation (SDA) approaches are desirable when only a few labeled samples from the target domain are available, and they can be readily adopted in real-world applications where data collection is expensive. In this study, we propose a new supervision signal, namely the center transfer loss (CTL), to efficiently align features under the SDA setting in deep learning (DL). Unlike most previous SDA methods, which rely on pairing up training samples, the proposed loss is trainable with only one-stream input under the mini-batch strategy. The CTL serves two main functions during training that improve the performance of DL models: aligning the domains and increasing the discriminative power of the features. A second improvement over previous approaches is that CTL dispenses with the hyper-parameter used to balance these two functions. Extensive experiments on well-known public datasets show that the proposed method outperforms recent state-of-the-art approaches.
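The core idea described above, class-wise centers shared across domains so that source and target features of the same class are pulled together while classes stay separated, can be sketched roughly as below. This is a minimal illustration modeled on the classic center-loss formulation; the exact CTL objective and center-update rule are assumptions here, not the paper's equations.

```python
import numpy as np

def center_transfer_loss(features, labels, centers):
    """Center-loss-style objective: pull each sample's feature toward its
    class center, with the centers shared by source and target domains so
    that same-class features from both domains align.

    features: (N, D) mini-batch features (source and target in one stream)
    labels:   (N,)   integer class labels for the batch
    centers:  (C, D) current per-class centers
    Returns the mean squared distance of each feature to its class center.
    """
    diffs = features - centers[labels]            # (N, D) residuals
    return np.mean(np.sum(diffs ** 2, axis=1))

def update_centers(features, labels, centers, lr=0.5):
    """Move each class center toward the mean of the batch features
    assigned to it (a common center-update heuristic for center losses)."""
    new_centers = centers.copy()
    for c in np.unique(labels):
        batch_mean = features[labels == c].mean(axis=0)
        new_centers[c] += lr * (batch_mean - new_centers[c])
    return new_centers
```

Because the loss consumes a single mixed batch of labeled source and target samples, no sample pairing across domains is needed, which matches the one-stream mini-batch training described in the abstract.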


Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0f77/9878501/0f161a5a5724/10489_2022_4414_Fig1_HTML.jpg
