Wei Kun, Yang Xu, Xu Zhe, Deng Cheng
IEEE Trans Image Process. 2024;33:1188-1198. doi: 10.1109/TIP.2024.3357258. Epub 2024 Feb 9.
Class-Incremental Unsupervised Domain Adaptation (CI-UDA) requires a model to learn continually over several steps, each containing unlabeled target-domain samples, while the labeled source dataset remains available throughout. The key to tackling the CI-UDA problem is to transfer domain-invariant knowledge from the source domain to the target domain while preserving knowledge from previous steps during continual adaptation. However, existing methods introduce substantial biased source knowledge at the current step, causing negative transfer and unsatisfactory performance. To tackle these problems, we propose a novel CI-UDA method named Pseudo-Label Distillation Continual Adaptation (PLDCA). We design a Pseudo-Label Distillation module that leverages the discriminative information of the target domain to filter biased knowledge at both the class and instance levels. In addition, Contrastive Alignment is proposed to reduce domain discrepancy by aligning the class-level feature representations of confident target samples with the source domain, and to exploit robust instance-level feature representations of unconfident target samples. Extensive experiments demonstrate the effectiveness and superiority of PLDCA. Code is available at code.
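The class- and instance-level filtering of biased pseudo-labels described above can be illustrated with a minimal confidence-based sketch. The two-stage rule, the thresholds, and the function name below are illustrative assumptions, not the paper's exact criterion:

```python
import numpy as np

def filter_pseudo_labels(probs, class_threshold=0.5, instance_threshold=0.8):
    """Split unlabeled target samples into confident / unconfident sets.

    probs: (N, C) softmax outputs of a source-trained classifier on
    target samples. Hypothetical two-stage filter: class level keeps
    classes whose mean assigned confidence is high enough; instance
    level additionally requires each sample's own confidence to be high.
    """
    pseudo = probs.argmax(axis=1)   # hard pseudo-labels
    conf = probs.max(axis=1)        # per-instance confidence
    # Class level: mean confidence over samples assigned to each class.
    reliable_classes = {c for c in np.unique(pseudo)
                        if conf[pseudo == c].mean() >= class_threshold}
    # Instance level: the sample itself must also be confident.
    confident = np.array([conf[i] >= instance_threshold
                          and pseudo[i] in reliable_classes
                          for i in range(len(pseudo))])
    return pseudo, confident
```

Under such a split, confident samples could feed a class-level alignment objective while unconfident ones are handled only at the instance level, mirroring the division the abstract describes.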