Domain generalization for image classification based on simplified self ensemble learning.

Authors

Qin Zhenkai, Guo Xinlu, Li Jun, Chen Yue

Affiliations

College of Information Technology, Guangxi Police College, Nanning, China.

College of Information Engineering, China Jiliang University, Hangzhou, China.

Publication

PLoS One. 2025 Apr 4;20(4):e0320300. doi: 10.1371/journal.pone.0320300. eCollection 2025.

Abstract

Domain generalization seeks to acquire knowledge from limited source data and apply it to an unknown target domain. Current approaches primarily tackle this challenge by attempting to eliminate the differences between domains. However, as cross-domain data evolves, the discrepancies between domains grow increasingly intricate and difficult to manage, rendering effective knowledge transfer across multiple domains a persistent challenge. While existing methods concentrate on minimizing domain discrepancies, they frequently encounter difficulties in maintaining effectiveness when confronted with high data complexity. In this paper, we present an approach that goes beyond merely eliminating domain discrepancies: it enhances the model's adaptability to improve its performance in unseen domains. Specifically, we frame the problem as an optimization process with the objective of minimizing a weighted loss function that balances cross-domain discrepancies and sample complexity. Our proposed self-ensemble learning framework simplifies this process by alternately training multiple classifiers on top of a single shared feature extractor. The introduction of focal loss and complex-sample loss weighting further fine-tunes the model's sensitivity to hard-to-learn instances, enhancing generalization to difficult samples. Finally, a dynamic, loss-adaptive weighted voting strategy ensures more accurate predictions across diverse domains. Experimental results on three public benchmark datasets (OfficeHome, PACS, and VLCS) demonstrate that our proposed algorithm achieves an improvement of up to 3.38% over existing methods in terms of generalization performance, particularly in complex and diverse real-world scenarios, such as autonomous driving and medical image analysis. These results highlight the practical utility of our approach in environments where cross-domain generalization is crucial for system reliability and safety.
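
To make the framework described in the abstract concrete, the following is a minimal PyTorch sketch of a self-ensemble with a single shared feature extractor, multiple classifier heads trained alternately, the standard focal loss, and a simple loss-weighted vote at inference. The backbone choice (ResNet-18), the number of heads, and all names (SelfEnsemble, focal_loss, train_step, weighted_vote) are illustrative assumptions rather than the authors' implementation; the paper's complex-sample loss weights and its exact dynamic voting rule are not reproduced here.

```python
# Minimal sketch of a self-ensemble classifier: one shared feature extractor,
# several classifier heads, focal loss, and loss-weighted voting at inference.
# All names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class SelfEnsemble(nn.Module):
    """Single shared feature extractor feeding several classifier heads."""

    def __init__(self, num_classes: int, num_heads: int = 3):
        super().__init__()
        backbone = models.resnet18(weights=None)   # backbone choice is an assumption
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                # expose raw features
        self.backbone = backbone
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(num_heads)]
        )

    def forward(self, x):
        feats = self.backbone(x)
        return [head(feats) for head in self.heads]   # one logit tensor per head


def focal_loss(logits, targets, gamma: float = 2.0):
    """Standard focal loss: (1 - p_t)^gamma down-weights easy samples."""
    log_p = F.log_softmax(logits, dim=1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)
    pt = log_pt.exp()
    return (-((1.0 - pt) ** gamma) * log_pt).mean()


def train_step(model, optimizer, x, y, step):
    """Alternate training (illustrative): one head is updated per mini-batch,
    while the shared backbone receives gradients on every step."""
    head_idx = step % len(model.heads)
    logits = model(x)[head_idx]
    loss = focal_loss(logits, y)   # the paper additionally applies complex-sample weights
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


@torch.no_grad()
def weighted_vote(model, x, head_losses):
    """Weight each head's prediction inversely to its recent loss
    (a simple stand-in for the paper's dynamic loss-adaptive voting)."""
    weights = torch.softmax(-torch.tensor(head_losses, device=x.device), dim=0)
    probs = torch.stack([F.softmax(l, dim=1) for l in model(x)])   # (heads, B, C)
    return (weights.view(-1, 1, 1) * probs).sum(dim=0).argmax(dim=1)
```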

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c875/11970687/a44f4b0326d4/pone.0320300.g001.jpg
