Shao Youjia, Wang Changshuo, Jia Qihang, Zhao Wencang
College of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao 266061, China.
Cyber Security Research Centre @ NTU, Nanyang Technological University, Singapore 639798, Singapore.
Neural Netw. 2025 Nov;191:107757. doi: 10.1016/j.neunet.2025.107757. Epub 2025 Jun 21.
Domain generalization addresses the domain shift challenge: it aims to generalize knowledge learned from multiple source domains with different distributions to a target domain that is unseen during training. Many domain generalization methods inadvertently retain unstable domain-specific features when performing domain-invariant representation learning. Our method is dedicated to comprehensive and explicit feature disentanglement, which enforces the independence of domain-invariant and domain-specific features and reduces spurious reliance on domain-specific features while pursuing sufficiently stable semantics. To this end, we present the novel learning paradigm of Source Split-flow Disentanglement with Smoothness-Fine-grained Feature Mitigation (SSDS-FFM). First, we propose a source split-flow structure in which the domain-invariant feature extractor and the domain-specific feature extractor share the same shallow layers and then split into two independent flows; mutual information minimization is used to separate the two kinds of features. At the same time, we avoid an over-confident domain classifier by introducing domain label smoothing to predict the corresponding soft probabilities, which, combined with the structural design, ensures the learning of domain-invariant representations. Second, to further enhance class discriminability, we propose fine-grained feature mitigation, which performs selective reverse contrastive learning to address local domain misalignment and alleviate an over-compressed feature space while obtaining sufficiently stable semantics. Our paradigm achieves comprehensive feature disentanglement and thus stable domain-invariant representation learning, improving generalization ability. Extensive experiments on the PACS, VLCS, Office-Home and DomainNet datasets verify the effectiveness and superiority of the proposed SSDS-FFM.
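The domain label smoothing step described above replaces hard domain labels with soft targets so the domain classifier cannot become over-confident. The abstract does not give the exact formulation, so the following is a minimal sketch of standard label smoothing applied to domain indices; the function name `smooth_domain_labels` and the smoothing factor `eps` are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def smooth_domain_labels(domain_idx, num_domains, eps=0.1):
    """Turn hard domain indices into smoothed soft probabilities.

    Each one-hot domain label keeps (1 - eps) of its mass on the true
    domain and spreads eps uniformly over all domains, discouraging an
    over-confident domain classifier. (Illustrative sketch; eps is an
    assumed hyperparameter, not a value from the paper.)
    """
    one_hot = np.eye(num_domains)[domain_idx]
    return (1.0 - eps) * one_hot + eps / num_domains

# e.g. three source domains, a batch of two samples from domains 0 and 2
soft = smooth_domain_labels(np.array([0, 2]), num_domains=3, eps=0.1)
```

With `eps = 0.1` and three domains, the true domain receives probability `0.9 + 0.1/3` and each other domain `0.1/3`, so every row still sums to one.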
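The mutual information minimization used to separate the two feature flows can take many forms, and the abstract does not specify the estimator. As an illustrative stand-in only, the sketch below penalizes the cross-covariance between the two feature sets; driving it to zero decorrelates them, a common tractable proxy for reducing mutual information (exact only for jointly Gaussian features). The function name and shapes are assumptions, not the paper's implementation.

```python
import numpy as np

def cross_covariance_penalty(z_inv, z_spec):
    """Squared Frobenius norm of the cross-covariance between
    domain-invariant features z_inv (n, d1) and domain-specific
    features z_spec (n, d2).

    The penalty is zero iff the two feature sets are linearly
    decorrelated, a simple proxy objective for minimizing their
    mutual information. (Illustrative sketch, not the paper's
    exact estimator.)
    """
    zi = z_inv - z_inv.mean(axis=0, keepdims=True)
    zs = z_spec - z_spec.mean(axis=0, keepdims=True)
    c = zi.T @ zs / (len(zi) - 1)  # (d1, d2) cross-covariance matrix
    return float((c ** 2).sum())
```

In a training loop this scalar would be added to the task loss so that gradients push the two extractors toward producing statistically independent features.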