Synthetic Source Universal Domain Adaptation through Contrastive Learning.

Affiliation

School of Computing, Gachon University, Seongnam 13120, Korea.

Publication Information

Sensors (Basel). 2021 Nov 12;21(22):7539. doi: 10.3390/s21227539.

DOI: 10.3390/s21227539
PMID: 34833615
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8620052/
Abstract

Universal domain adaptation (UDA) is a crucial research topic for efficient deep learning model training using data from various imaging sensors. However, its development is affected by unlabeled target data. Moreover, the nonexistence of prior knowledge of the source and target domain makes it more challenging for UDA to train models. I hypothesize that the degradation of trained models in the target domain is caused by the lack of direct training loss to improve the discriminative power of the target domain data. As a result, the target data adapted to the source representations is biased toward the source domain. I found that the degradation was more pronounced when I used synthetic data for the source domain and real data for the target domain. In this paper, I propose a UDA method with target domain contrastive learning. The proposed method enables models to leverage synthetic data for the source domain and train the discriminativeness of target features in an unsupervised manner. In addition, the target domain feature extraction network is shared with the source domain classification task, preventing unnecessary computational growth. Extensive experimental results on VisDa-2017 and MNIST to SVHN demonstrated that the proposed method significantly outperforms the baseline by 2.7% and 5.1%, respectively.

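The abstract outlines the overall training recipe: labeled synthetic source data drive a standard classification loss, while unlabeled target data are trained with an unsupervised contrastive objective through the same shared feature extractor, so the target branch adds no extra network. The sketch below illustrates that general idea in PyTorch; all module names, dimensions, augmentation stand-ins, and the equal loss weighting are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the recipe described in the abstract: one shared feature
# extractor, (i) supervised classification on labeled synthetic source data and
# (ii) an InfoNCE-style contrastive loss on two views of unlabeled target data.
# All names and hyperparameters below are hypothetical, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedBackbone(nn.Module):
    """Feature extractor shared by the source classifier and the target
    contrastive head, so no separate target network is introduced."""
    def __init__(self, in_dim=3 * 32 * 32, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, 256),
                                 nn.ReLU(), nn.Linear(256, feat_dim))

    def forward(self, x):
        return self.net(x)

def info_nce(z1, z2, temperature=0.1):
    """Contrastive loss between two views of the same target batch:
    matching indices are positives, all other pairs are negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature        # pairwise similarity matrix
    labels = torch.arange(z1.size(0))         # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

backbone = SharedBackbone()
classifier = nn.Linear(128, 12)               # e.g. the 12 VisDA-2017 classes
optimizer = torch.optim.SGD(list(backbone.parameters()) +
                            list(classifier.parameters()), lr=1e-2)

# One illustrative training step; random tensors stand in for a labeled
# synthetic source batch and two augmented views of an unlabeled target batch.
src_x, src_y = torch.randn(8, 3, 32, 32), torch.randint(0, 12, (8,))
tgt_view1, tgt_view2 = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)

optimizer.zero_grad()
cls_loss = F.cross_entropy(classifier(backbone(src_x)), src_y)  # source supervision
con_loss = info_nce(backbone(tgt_view1), backbone(tgt_view2))   # target discriminativeness
loss = cls_loss + con_loss                                      # loss weighting is a modeling choice
loss.backward()
optimizer.step()
```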

Figure images (g001–g005, PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ba/8620052/c2a73a047a7f/sensors-21-07539-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ba/8620052/c7d10b6afbe2/sensors-21-07539-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ba/8620052/aec07c6b4711/sensors-21-07539-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ba/8620052/37ce8a7c7bd5/sensors-21-07539-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77ba/8620052/03e754cdacd4/sensors-21-07539-g005.jpg

Similar Articles

1. Synthetic Source Universal Domain Adaptation through Contrastive Learning. Sensors (Basel). 2021 Nov 12;21(22):7539. doi: 10.3390/s21227539.
2. Joint Clustering and Discriminative Feature Alignment for Unsupervised Domain Adaptation. IEEE Trans Image Process. 2021;30:7842-7855. doi: 10.1109/TIP.2021.3109530. Epub 2021 Sep 16.
3. Unsupervised Domain Adaptation with Asymmetrical Margin Disparity Loss and Outlier Sample Extraction. Neural Netw. 2023 Nov;168:602-614. doi: 10.1016/j.neunet.2023.09.045. Epub 2023 Sep 27.
4. Adaptive Contrastive Learning with Label Consistency for Source Data Free Unsupervised Domain Adaptation. Sensors (Basel). 2022 Jun 2;22(11):4238. doi: 10.3390/s22114238.
5. Contrastive Adaptation Network for Single- and Multi-Source Domain Adaptation. IEEE Trans Pattern Anal Mach Intell. 2022 Apr;44(4):1793-1804. doi: 10.1109/TPAMI.2020.3029948. Epub 2022 Mar 4.
6. IAS-NET: Joint intraclassly adaptive GAN and segmentation network for unsupervised cross-domain in neonatal brain MRI segmentation. Med Phys. 2021 Nov;48(11):6962-6975. doi: 10.1002/mp.15212. Epub 2021 Sep 25.
7. CALDA: Improving Multi-Source Time Series Domain Adaptation With Contrastive Adversarial Learning. IEEE Trans Pattern Anal Mach Intell. 2023 Dec;45(12):14208-14221. doi: 10.1109/TPAMI.2023.3298346. Epub 2023 Nov 6.
8. Margin Preserving Self-Paced Contrastive Learning Towards Domain Adaptation for Medical Image Segmentation. IEEE J Biomed Health Inform. 2022 Feb;26(2):638-647. doi: 10.1109/JBHI.2022.3140853. Epub 2022 Feb 4.
9. LE-UDA: Label-Efficient Unsupervised Domain Adaptation for Medical Image Segmentation. IEEE Trans Med Imaging. 2023 Mar;42(3):633-646. doi: 10.1109/TMI.2022.3214766. Epub 2023 Mar 2.
10. Class-Incremental Unsupervised Domain Adaptation via Pseudo-Label Distillation. IEEE Trans Image Process. 2024;33:1188-1198. doi: 10.1109/TIP.2024.3357258. Epub 2024 Feb 9.
