
Unsupervised Domain Adaptation for Image Classification and Object Detection Using Guided Transfer Learning Approach and JS Divergence.

Affiliations

Computer Science & Engineering Department, Devang Patel Institute of Advance Technology and Research (DEPSTAR), Faculty of Technology & Engineering, Charotar University of Science and Technology (CHARUSAT), Changa 388421, Anand, India.

U & P U Patel Department of Computer Engineering, Chandubhai S. Patel Institute of Technology (CSPIT), Faculty of Technology & Engineering, Charotar University of Science and Technology (CHARUSAT), Changa 388421, Anand, India.

Publication information

Sensors (Basel). 2023 Apr 30;23(9):4436. doi: 10.3390/s23094436.

Abstract

Unsupervised domain adaptation (UDA) is a transfer learning technique utilized in deep learning. UDA aims to reduce the distribution gap between labeled source and unlabeled target domains by adapting a model through fine-tuning. Typically, UDA approaches assume the same categories in both domains. The effectiveness of transfer learning depends on the degree of similarity between the domains, which determines an efficient fine-tuning strategy. Furthermore, domain-specific tasks generally perform well when the feature distributions of the domains are similar. However, utilizing a trained source model directly in the target domain may not generalize effectively due to domain shift. Domain shift can be caused by intra-class variations, camera sensor variations, background variations, and geographical changes. To address these issues, we design an efficient unsupervised domain adaptation network for image classification and object detection that can learn transferable feature representations and reduce the domain shift problem in a unified network. We propose the guided transfer learning approach to select the layers for fine-tuning the model, which enhances feature transferability and utilizes the JS-Divergence to minimize the domain discrepancy between the domains. We evaluate our proposed approaches using multiple benchmark datasets. Our domain adaptive image classification approach achieves 93.2% accuracy on the Office-31 dataset and 75.3% accuracy on the Office-Home dataset. In addition, our domain adaptive object detection approach achieves 51.1% mAP on the Foggy Cityscapes dataset and 72.7% mAP on the Indian Vehicle dataset. We conduct extensive experiments and ablation studies to demonstrate the effectiveness and efficiency of our work. Experimental results also show that our work significantly outperforms the existing methods.
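
The abstract gives no implementation details, but the domain-alignment idea it describes can be sketched. Below is a minimal, hypothetical PyTorch sketch (not the authors' published code): it computes the Jensen-Shannon divergence, JS(P, Q) = 1/2 KL(P || M) + 1/2 KL(Q || M) with M = (P + Q)/2, between the batch-averaged class-probability distributions of a labeled source batch and an unlabeled target batch, and adds it to the supervised classification loss as a domain-discrepancy penalty. The model, the weighting factor lambda_jsd, and the exact placement of the loss are assumptions made for illustration.

```python
# Illustrative sketch only; the network, the loss placement, and lambda_jsd are
# assumptions for demonstration, not the authors' published implementation.
import torch
import torch.nn.functional as F


def js_divergence(p: torch.Tensor, q: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Jensen-Shannon divergence between two categorical distributions p and q
    (1-D tensors of the same length that each sum to 1)."""
    m = 0.5 * (p + q)
    kl_pm = torch.sum(p * torch.log((p + eps) / (m + eps)))
    kl_qm = torch.sum(q * torch.log((q + eps) / (m + eps)))
    return 0.5 * (kl_pm + kl_qm)


def train_step(model, optimizer, source_x, source_y, target_x, lambda_jsd=0.1):
    """One adaptation step: supervised loss on the labeled source batch plus a
    JS-divergence penalty between the batch-level class distributions of the
    source and target batches (hypothetical formulation)."""
    optimizer.zero_grad()
    source_logits = model(source_x)                          # labeled source images
    target_logits = model(target_x)                          # unlabeled target images
    cls_loss = F.cross_entropy(source_logits, source_y)
    src_dist = F.softmax(source_logits, dim=1).mean(dim=0)   # marginal class dist., source
    tgt_dist = F.softmax(target_logits, dim=1).mean(dim=0)   # marginal class dist., target
    loss = cls_loss + lambda_jsd * js_divergence(src_dist, tgt_dist)
    loss.backward()
    optimizer.step()
    return loss.item()
```

For the guided layer-selection part of the method, a training loop of this kind would typically freeze some backbone layers and fine-tune only the selected ones (for example via requires_grad_(False)); which layers the guided transfer learning approach selects is not specified in the abstract, so that step is omitted here.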

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/22ef/10181527/db834df4c79f/sensors-23-04436-g001.jpg
