School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, China; SMILE Lab, School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China.
Department of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong, China.
Neural Netw. 2020 Jul;127:182-192. doi: 10.1016/j.neunet.2020.03.025. Epub 2020 Apr 24.
The accuracy of deep learning models (e.g., convolutional neural networks) on an image classification task critically relies on the amount of labeled training data. When a classification task must be solved on a new domain that lacks labeled data but has access to cheaply available unlabeled data, unsupervised domain adaptation is a promising technique for boosting performance without incurring extra labeling cost, under the assumption that images from different domains share some invariant characteristics. In this paper, we propose a new unsupervised domain adaptation method, named Domain-Adversarial Residual-Transfer (DART) learning of deep neural networks, to tackle cross-domain image classification tasks. In contrast to existing unsupervised domain adaptation approaches, the proposed DART not only learns domain-invariant features via adversarial training, but also achieves robust domain-adaptive classification via a residual-transfer strategy, all within an end-to-end training framework. We evaluate the proposed method on cross-domain image classification tasks over several well-known benchmark data sets, on which it clearly outperforms state-of-the-art approaches.
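To make the two ingredients named in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' implementation: adversarial feature alignment is realized with a gradient reversal layer feeding a domain discriminator, and the residual-transfer idea is approximated by modeling the target classifier as the source classifier's logits plus a learned residual. All module names, layer sizes, and the `alpha` weighting are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of domain-adversarial training with a
# residual classifier shift, in the spirit of DART.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity forward, negated (scaled) gradient backward."""
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg() * ctx.alpha, None

class DARTSketch(nn.Module):
    def __init__(self, feat_dim=256, num_classes=31):
        super().__init__()
        # Shared feature extractor (a CNN backbone in practice; an MLP here for brevity).
        self.features = nn.Sequential(nn.Linear(784, feat_dim), nn.ReLU())
        # Domain discriminator trained adversarially through the gradient reversal layer.
        self.domain_disc = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 2))
        # Source classifier plus a residual correction: the target classifier is
        # modeled as the source logits plus a learned residual (assumed form).
        self.cls_src = nn.Linear(feat_dim, num_classes)
        self.residual = nn.Sequential(nn.Linear(num_classes, num_classes), nn.ReLU(),
                                      nn.Linear(num_classes, num_classes))

    def forward(self, x, alpha=1.0):
        f = self.features(x)
        logits_src = self.cls_src(f)                          # classifier for labeled source data
        logits_tgt = logits_src + self.residual(logits_src)   # residual-shifted target classifier
        domain_logits = self.domain_disc(GradReverse.apply(f, alpha))
        return logits_src, logits_tgt, domain_logits

# Usage: labeled source images contribute a cross-entropy loss on logits_src,
# samples from both domains contribute a domain-classification loss on domain_logits,
# and unlabeled target predictions are read from logits_tgt.
model = DARTSketch()
x = torch.randn(8, 784)
logits_src, logits_tgt, domain_logits = model(x, alpha=0.1)
```

The gradient reversal trick lets a single backward pass train the feature extractor to confuse the domain discriminator while the discriminator itself is trained to tell domains apart, which is one common way to obtain the domain-invariant features the abstract refers to.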