Wang Yunqi, Liu Furui, Chen Zhitang, Wu Yik-Chung, Hao Jianye, Chen Guangyong, Heng Pheng-Ann
IEEE Trans Image Process. 2023;32:235-250. doi: 10.1109/TIP.2022.3227457. Epub 2022 Dec 19.
Domain generalization aims to learn, from multiple source domains, knowledge that is invariant across different distributions yet semantically meaningful for downstream tasks, so as to improve the model's generalization ability on unseen target domains. The fundamental objective is to understand the underlying "invariance" behind these observational distributions, and such invariance has been shown to have a close connection to causality. While many existing approaches exploit the property that causal features are invariant across domains, we instead consider the invariance of the average causal effect of the features on the labels. This invariance regularizes our training approach, in which interventions are performed on features to enforce the stability of the classifier's causal predictions across domains. Our work thus sheds some light on the domain generalization problem by introducing invariance of the mechanisms into the learning process. Experiments on several benchmark datasets demonstrate the effectiveness of the proposed method against state-of-the-art approaches. The code is available at: https://github.com/lithostark/Contrastive-ACE.
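The central mechanism described above — intervening on a feature and requiring the resulting average causal effect (ACE) on the classifier's output to agree across source domains — can be sketched as a toy example. The code below is an illustrative sketch, not the paper's implementation: it uses a fixed linear classifier, binary intervention values, and a variance-based penalty, all of which are assumptions for demonstration only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def average_causal_effect(w, b, X, j, lo=0.0, hi=1.0):
    """Estimate the average causal effect of feature j on the prediction,
    E[f(do(x_j=hi)) - f(do(x_j=lo))], by overwriting feature j for every
    sample (an interventional forward pass). The do-values lo/hi are
    illustrative choices, not from the paper."""
    X_hi, X_lo = X.copy(), X.copy()
    X_hi[:, j] = hi
    X_lo[:, j] = lo
    return float(np.mean(sigmoid(X_hi @ w + b) - sigmoid(X_lo @ w + b)))

def ace_invariance_penalty(w, b, domains, j):
    """Variance of the per-domain ACE of feature j: zero exactly when the
    causal effect of that feature on the label is domain-invariant. A term
    like this could be added to the training loss as a regularizer."""
    aces = [average_causal_effect(w, b, X, j) for X in domains]
    return float(np.var(aces))
```

During training, such a penalty would be minimized jointly with the usual classification loss, driving the learned features toward causal effects that remain stable across source domains.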