Adaptive Domain Generalization Via Online Disagreement Minimization.

Authors

Zhang Xin, Chen Ying-Cong

Publication

IEEE Trans Image Process. 2023;32:4247-4258. doi: 10.1109/TIP.2023.3295739. Epub 2023 Jul 26.

DOI: 10.1109/TIP.2023.3295739
PMID: 37467100
Abstract

Deep neural networks suffer significant performance deterioration when there is a distribution shift between deployment and training. Domain Generalization (DG) aims to safely transfer a model to unseen target domains by relying only on a set of source domains. Although various DG approaches have been proposed, a recent study named DomainBed (Gulrajani and Lopez-Paz, 2020) reveals that most of them do not beat simple empirical risk minimization (ERM). To this end, we propose a general framework that is orthogonal to existing DG algorithms and can improve their performance consistently. Unlike previous DG works that rely on a static source model in the hope that it proves universal, our proposed AdaODM adaptively modifies the source model at test time for different target domains. Specifically, we create multiple domain-specific classifiers on top of a shared domain-generic feature extractor. The feature extractor and classifiers are trained in an adversarial way: the feature extractor embeds the input samples into a domain-invariant space, while the multiple classifiers capture distinct decision boundaries, each tied to a specific source domain. During testing, distribution differences between target and source domains can be effectively measured by leveraging prediction disagreement among the source classifiers. By fine-tuning source models to minimize this disagreement at test time, target-domain features are well aligned to the invariant feature space. We verify AdaODM on two popular DG methods, namely ERM and CORAL, and four DG benchmarks, namely VLCS, PACS, OfficeHome, and TerraIncognita. The results show AdaODM stably improves the generalization capacity on unseen domains and achieves state-of-the-art performance.
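The test-time step the abstract describes can be sketched in miniature. This is our own toy construction under stated assumptions, not the authors' code: a linear feature extractor stands in for the shared deep extractor, two frozen linear heads stand in for the per-source-domain classifiers, and plain gradient descent shrinks the squared disagreement between the heads' logits on an unlabeled target sample. All names (`A`, `W1`, `W2`) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_feat, n_cls = 4, 3, 2
A = rng.normal(size=(d_feat, d_in))    # shared feature extractor (adapted at test time)
W1 = rng.normal(size=(n_cls, d_feat))  # classifier head for source domain 1 (frozen)
W2 = rng.normal(size=(n_cls, d_feat))  # classifier head for source domain 2 (frozen)
x = rng.normal(size=d_in)              # one unlabeled target-domain sample

D = W1 - W2                            # logit disagreement operator

def disagreement(A):
    """Squared L2 distance between the two heads' logits on x."""
    diff = D @ (A @ x)
    return float(diff @ diff)

# Closed-form gradient of ||D A x||^2 w.r.t. A is 2 D^T D A x x^T; a step size
# below the curvature bound guarantees the disagreement shrinks monotonically.
lr = 0.25 / (np.linalg.eigvalsh(D.T @ D).max() * float(x @ x))

before = disagreement(A)
for _ in range(50):                    # a few test-time fine-tuning steps
    A -= lr * (2.0 * D.T @ D @ A @ np.outer(x, x))
after = disagreement(A)

print(after < before)  # True: the heads now agree more on the target sample
```

In the paper, the extractor is a deep network updated by backpropagation; the toy shows only the mechanism — reducing inter-classifier disagreement pulls the target-domain features toward the region of feature space where the source classifiers already agree.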


Similar Articles

1. Adaptive Domain Generalization Via Online Disagreement Minimization.
   IEEE Trans Image Process. 2023;32:4247-4258. doi: 10.1109/TIP.2023.3295739. Epub 2023 Jul 26.
2. Continuous Disentangled Joint Space Learning for Domain Generalization.
   IEEE Trans Neural Netw Learn Syst. 2024 Sep 20;PP. doi: 10.1109/TNNLS.2024.3454689.
3. Mask-Shift-Inference: A novel paradigm for domain generalization.
   Neural Netw. 2024 Nov;179:106629. doi: 10.1016/j.neunet.2024.106629. Epub 2024 Aug 12.
4. INSURE: An Information Theory iNspired diSentanglement and pURification modEl for Domain Generalization.
   IEEE Trans Image Process. 2024;33:3508-3519. doi: 10.1109/TIP.2024.3404241. Epub 2024 Jun 4.
5. Domain generalization for image classification based on simplified self ensemble learning.
   PLoS One. 2025 Apr 4;20(4):e0320300. doi: 10.1371/journal.pone.0320300. eCollection 2025.
6. On the value of label and semantic information in domain generalization.
   Neural Netw. 2023 Jun;163:244-255. doi: 10.1016/j.neunet.2023.03.023. Epub 2023 Mar 29.
7. Domain Adaptive Ensemble Learning.
   IEEE Trans Image Process. 2021;30:8008-8018. doi: 10.1109/TIP.2021.3112012. Epub 2021 Sep 23.
8. Local domain generalization with low-rank constraint for EEG-based emotion recognition.
   Front Neurosci. 2023 Nov 7;17:1213099. doi: 10.3389/fnins.2023.1213099. eCollection 2023.
9. Progressive Invariant Causal Feature Learning for Single Domain Generalization.
   IEEE Trans Image Process. 2025;34:2694-2706. doi: 10.1109/TIP.2025.3563772. Epub 2025 May 6.
10. UniAda: Domain Unifying and Adapting Network for Generalizable Medical Image Segmentation.
    IEEE Trans Med Imaging. 2025 May;44(5):1988-2001. doi: 10.1109/TMI.2024.3523319. Epub 2025 May 2.