

Mask-Shift-Inference: A novel paradigm for domain generalization.

Affiliations

College of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao Shandong 266061, China.

College of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao Shandong 266061, China; Qingdao Institute of Intelligent Navigation and Control, Qingdao Shandong 266071, China.

Publication information

Neural Netw. 2024 Nov;179:106629. doi: 10.1016/j.neunet.2024.106629. Epub 2024 Aug 12.

DOI: 10.1016/j.neunet.2024.106629
PMID: 39153401
Abstract

Domain Generalization (DG) focuses on Out-Of-Distribution (OOD) generalization: learning a robust model that transfers knowledge acquired from source domains to unseen target domains. However, because of domain shift, domain-invariant representation learning is challenging. Guided by fine-grained knowledge, we propose Mask-Shift-Inference (MSI), a novel paradigm for DG built on Convolutional Neural Network (CNN) architectures. Rather than relying on a series of constraints and assumptions for model optimization, this paradigm shifts the focus to feature channels in the latent space for domain-invariant representation learning. We put forward a two-branch working mode with a main module and multiple domain-specific sub-modules. The latter achieve good prediction performance only in their own specific domains and poor predictions in the other source domains, which provides the main module with fine-grained knowledge guidance and improves the cognitive ability of MSI. First, during the forward propagation of the main module, MSI accurately discards unstable channels, identified by spurious classifications that vary across domains; such channels have domain-specific prediction limitations and are not conducive to generalization. In this process, a progressive scheme adaptively increases the masking ratio with training progress to further reduce the risk of overfitting. The paradigm then enters a compatible shifting stage before formal prediction. To maximize semantic retention, we implement domain style matching and shifting through a simple Fourier-transform-based transformation, which explicitly and safely shifts the target domain back to the source domain whose style is closest to it, requiring no additional model updates and reducing the domain gap. Finally, MSI enters the formal inference stage: the updated target domain is predicted by the main module trained in the previous stages, benefiting from familiar knowledge via the nearest source domain's masking scheme. The paradigm is logically progressive: it intuitively excludes the confounding influence of domain-specific spurious information, mitigates domain shift, and implicitly performs semantically invariant representation learning, achieving robust OOD generalization. Extensive experimental results on the PACS, VLCS, Office-Home and DomainNet datasets verify the superiority and effectiveness of the proposed method.
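The channel-masking stage described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear masking schedule, the `max_ratio` value, and the per-channel instability scores are all illustrative assumptions standing in for the paper's adaptive scheme and cross-domain spurious-classification criterion.

```python
import numpy as np

def progressive_mask_ratio(epoch, total_epochs, max_ratio=0.5):
    """Anneal the channel-masking ratio with training progress.
    A linear schedule is assumed here for illustration."""
    return max_ratio * min(epoch / total_epochs, 1.0)

def mask_unstable_channels(features, instability_scores, ratio):
    """Zero out the fraction `ratio` of feature channels with the
    highest instability scores (e.g., channels whose predictions vary
    across source domains). `features` has shape (N, C, H, W)."""
    n_channels = features.shape[1]
    k = int(n_channels * ratio)
    if k == 0:
        return features
    # indices of the k most unstable channels
    drop = np.argsort(instability_scores)[-k:]
    masked = features.copy()
    masked[:, drop] = 0.0
    return masked
```

In this sketch the ratio grows as training proceeds, so early epochs keep most channels while later epochs discard more of the domain-specific ones, mirroring the progressive scheme the abstract describes.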
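The compatible shifting stage follows the general Fourier-based style-transfer idea: low-frequency amplitudes carry style, while phase carries semantics, so swapping the amplitude band shifts style with maximal semantic retention. A minimal sketch, assuming single-channel images and a hypothetical `beta` parameter controlling the size of the swapped low-frequency band (the paper's exact matching and shifting procedure may differ):

```python
import numpy as np

def fourier_style_shift(target_img, source_img, beta=0.1):
    """Shift the style of `target_img` toward `source_img` by replacing
    its low-frequency amplitude spectrum while keeping its phase.
    Both images are (H, W) float arrays of the same shape."""
    fft_t = np.fft.fft2(target_img)
    fft_s = np.fft.fft2(source_img)
    amp_t, phase_t = np.abs(fft_t), np.angle(fft_t)
    amp_s = np.abs(fft_s)
    # center the spectra so low frequencies sit in the middle
    amp_t = np.fft.fftshift(amp_t)
    amp_s = np.fft.fftshift(amp_s)
    h, w = target_img.shape
    bh, bw = int(h * beta), int(w * beta)
    ch, cw = h // 2, w // 2
    # swap the central (low-frequency) amplitude block: style transfer
    amp_t[ch - bh:ch + bh + 1, cw - bw:cw + bw + 1] = \
        amp_s[ch - bh:ch + bh + 1, cw - bw:cw + bw + 1]
    amp_t = np.fft.ifftshift(amp_t)
    # recombine the source-like amplitude with the target's own phase
    shifted = np.fft.ifft2(amp_t * np.exp(1j * phase_t))
    return np.real(shifted)
```

Because only amplitudes are exchanged and the target's phase is untouched, no model update is needed; the nearest source domain (by style) would be chosen before calling this function.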


Similar articles

1. Mask-Shift-Inference: A novel paradigm for domain generalization. Neural Netw. 2024 Nov;179:106629. doi: 10.1016/j.neunet.2024.106629. Epub 2024 Aug 12.
2. Adaptive Domain Generalization Via Online Disagreement Minimization. IEEE Trans Image Process. 2023;32:4247-4258. doi: 10.1109/TIP.2023.3295739. Epub 2023 Jul 26.
3. It takes two: Dual Branch Augmentation Module for domain generalization. Neural Netw. 2024 Apr;172:106094. doi: 10.1016/j.neunet.2023.106094. Epub 2024 Jan 2.
4. Learning Generalizable Models via Disentangling Spurious and Enhancing Potential Correlations. IEEE Trans Image Process. 2024;33:1627-1642. doi: 10.1109/TIP.2024.3361689. Epub 2024 Feb 27.
5. INSURE: An Information Theory iNspired diSentanglement and pURification modEl for Domain Generalization. IEEE Trans Image Process. 2024;33:3508-3519. doi: 10.1109/TIP.2024.3404241. Epub 2024 Jun 4.
6. Local domain generalization with low-rank constraint for EEG-based emotion recognition. Front Neurosci. 2023 Nov 7;17:1213099. doi: 10.3389/fnins.2023.1213099. eCollection 2023.
7. Ensemble machine learning model trained on a new synthesized dataset generalizes well for stress prediction using wearable devices. J Biomed Inform. 2023 Dec;148:104556. doi: 10.1016/j.jbi.2023.104556. Epub 2023 Dec 2.
8. Evolving Domain Generalization via Latent Structure-Aware Sequential Autoencoder. IEEE Trans Pattern Anal Mach Intell. 2023 Dec;45(12):14514-14527. doi: 10.1109/TPAMI.2023.3319984. Epub 2023 Nov 3.
9. Improving domain generalization performance for medical image segmentation via random feature augmentation. Methods. 2023 Oct;218:149-157. doi: 10.1016/j.ymeth.2023.08.003. Epub 2023 Aug 10.
10. On the value of label and semantic information in domain generalization. Neural Netw. 2023 Jun;163:244-255. doi: 10.1016/j.neunet.2023.03.023. Epub 2023 Mar 29.