Smooth-Guided Implicit Data Augmentation for Domain Generalization.

Author Information

Wang Mengzhu, Liu Junze, Luo Ge, Wang Shanshan, Wang Wei, Lan Long, Wang Ye, Nie Feiping

Publication Information

IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):4984-4995. doi: 10.1109/TNNLS.2024.3377439. Epub 2025 Feb 28.

Abstract

The training process of a domain generalization (DG) model involves utilizing one or more interrelated source domains to attain optimal performance on an unseen target domain. Existing DG methods often use auxiliary networks or require high computational costs to improve the model's generalization ability by incorporating a diverse set of source domains. In contrast, this work proposes a method called Smooth-Guided Implicit Data Augmentation (SGIDA) that operates in the feature space to capture the diversity of source domains. To amplify the model's generalization capacity, a distance metric learning (DML) loss function is incorporated. Additionally, rather than depending on deep features, the suggested approach employs logits produced from cross entropy (CE) losses with infinite augmentations. A theoretical analysis shows that logits are effective in estimating distances defined on original features, and the proposed approach is thoroughly analyzed to provide a better understanding of why logits are beneficial for DG. Moreover, to increase the diversity of the source domain, a sampling-based method called smooth is introduced to obtain semantic directions from interclass relations. The effectiveness of the proposed approach is demonstrated through extensive experiments on widely used DG, object detection, and remote sensing datasets, where it achieves significant improvements over existing state-of-the-art methods across various backbone networks.
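
The abstract describes implicit data augmentation in feature space, where an effectively infinite number of augmentations is folded into the cross-entropy loss rather than generated explicitly. Below is a minimal sketch of that general idea in the style of implicit semantic data augmentation (ISDA), assuming a linear classifier head and per-class feature covariance estimates; the function name `implicit_aug_ce_loss`, its arguments, and the covariance handling are illustrative assumptions, not the authors' SGIDA code, and the sketch omits SGIDA's smooth-guided sampling of semantic directions and its DML loss.

```python
import torch
import torch.nn.functional as F

def implicit_aug_ce_loss(features, labels, fc_weight, fc_bias, class_covs, lam):
    """ISDA-style upper bound of the expected cross-entropy loss when each
    feature is perturbed by infinitely many samples from N(0, lam * Sigma_y).

    features   (N, D): deep features from the backbone
    labels     (N,)  : integer class labels
    fc_weight  (C, D): weight of the final linear classifier
    fc_bias    (C,)  : bias of the final linear classifier
    class_covs (C, D, D): per-class feature covariance estimates
    lam        : augmentation strength (scalar)
    """
    logits = features @ fc_weight.t() + fc_bias           # (N, C) clean logits
    w_y = fc_weight[labels]                                # (N, D) true-class weights
    diff = fc_weight.unsqueeze(0) - w_y.unsqueeze(1)       # (N, C, D): w_j - w_{y_i}
    sigma_y = class_covs[labels]                           # (N, D, D): Sigma_{y_i}
    # quadratic term (w_j - w_{y_i})^T Sigma_{y_i} (w_j - w_{y_i}) for every class j
    quad = torch.einsum('ncd,nde,nce->nc', diff, sigma_y, diff)
    aug_logits = logits + 0.5 * lam * quad                 # implicitly augmented logits
    return F.cross_entropy(aug_logits, labels)


# Toy usage with random tensors (shapes only; not the authors' setup).
if __name__ == "__main__":
    N, D, C = 8, 16, 4
    feats = torch.randn(N, D)
    labels = torch.randint(0, C, (N,))
    W, b = torch.randn(C, D), torch.zeros(C)
    covs = torch.eye(D).expand(C, D, D).contiguous()       # placeholder identity covariances
    print(implicit_aug_ce_loss(feats, labels, W, b, covs, lam=0.5))
```

Because the Gaussian perturbation only shifts each non-target logit by a class-dependent quadratic term, the expected loss over infinitely many augmentations can be bounded without ever materializing augmented features, which is what makes the augmentation "implicit".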

