School of Computing, Gachon University, Seongnam 13120, Korea.
Sensors (Basel). 2021 Nov 12;21(22):7539. doi: 10.3390/s21227539.
Universal domain adaptation (UDA) is a crucial research topic for efficiently training deep learning models on data from various imaging sensors. However, its development is hindered by the fact that the target data are unlabeled. Moreover, the absence of prior knowledge about the source and target domains makes model training under UDA even more challenging. I hypothesize that the degradation of trained models in the target domain is caused by the lack of a direct training loss that improves the discriminative power of the target-domain data. As a result, the target data adapted to the source representations are biased toward the source domain. I found that this degradation was more pronounced when synthetic data were used for the source domain and real data for the target domain. In this paper, I propose a UDA method with target-domain contrastive learning. The proposed method enables models to leverage synthetic data for the source domain and to learn discriminative target features in an unsupervised manner. In addition, the target-domain feature extraction network is shared with the source-domain classification task, preventing unnecessary computational growth. Extensive experiments on VisDA-2017 and MNIST-to-SVHN demonstrate that the proposed method significantly outperforms the baseline, by 2.7% and 5.1%, respectively.
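The abstract describes learning discriminative target features in an unsupervised manner via contrastive learning. The paper's exact loss is not given here, so as an illustration only, the sketch below implements a standard NT-Xent contrastive loss (as popularized by SimCLR) over two augmented views of an unlabeled target batch; the function name and the choice of NT-Xent are assumptions, not the author's stated formulation.

```python
import numpy as np

def ntxent_loss(z1, z2, tau=0.5):
    """NT-Xent contrastive loss over two augmented views (z1, z2) of the
    same unlabeled target batch. Each row is a feature vector; row i of z1
    and row i of z2 form a positive pair, all other rows are negatives.
    (Illustrative choice of loss -- the paper's exact objective may differ.)
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize -> cosine sim
    sim = z @ z.T / tau                                # (2N, 2N) scaled similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    n = z1.shape[0]
    # index of the positive partner for each of the 2N rows
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_denom = np.log(np.exp(sim).sum(axis=1))        # log-sum-exp over all candidates
    loss = -(sim[np.arange(2 * n), pos] - log_denom)   # cross-entropy toward the positive
    return loss.mean()
```

In a full UDA training loop, this unsupervised loss on target batches would be added to the supervised classification loss on (synthetic) source batches, with both losses backpropagating through the shared feature extractor, matching the abstract's point that no separate target network is needed.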