Yu Xi, Tseng Huan-Hsin, Yoo Shinjae, Ling Haibin, Lin Yuewei
IEEE Trans Image Process. 2024;33:3508-3519. doi: 10.1109/TIP.2024.3404241. Epub 2024 Jun 4.
Domain Generalization (DG) aims to learn a model that generalizes to an unseen target domain by training only on multiple observed source domains. Although many DG methods focus on extracting domain-invariant features, domain-specific class-relevant features have attracted attention and have been argued to benefit generalization to the unseen target domain. To exploit this class-relevant domain-specific information, in this paper we propose an Information theory iNspired diSentanglement and pURification modEl (INSURE) that explicitly disentangles the latent features to obtain sufficient and compact (necessary) class-relevant features for generalization to the unseen domain. Specifically, we first propose an information-theory-inspired loss function that ensures the disentangled class-relevant feature contains sufficient class-label information while the other, auxiliary feature contains sufficient domain information. We further propose a paired purification loss function that forces the auxiliary feature to discard all class-relevant information, so that the class-relevant feature carries sufficient and compact (necessary) class-relevant information. Moreover, instead of using multiple encoders, we propose a learnable binary mask as the disentangler, which makes the disentanglement more efficient and the disentangled features complementary to each other. We conduct extensive experiments on five widely used DG benchmark datasets: PACS, VLCS, OfficeHome, TerraIncognita, and DomainNet. The proposed INSURE achieves state-of-the-art performance. We also empirically show that domain-specific class-relevant features are beneficial for domain generalization. The code is available at https://github.com/yuxi120407/INSURE.
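To make the learnable-binary-mask disentangler concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' implementation; class names, dimensions, and the straight-through relaxation are assumptions). It only illustrates the idea of splitting a shared latent feature into a class-relevant part and a complementary auxiliary part via a single learned mask instead of separate encoders.

```python
# Hypothetical sketch of a learnable binary-mask disentangler; not the INSURE code.
import torch
import torch.nn as nn


class BinaryMaskDisentangler(nn.Module):
    """Split a latent feature z into a class-relevant part z_c = m * z and a
    complementary auxiliary part z_a = (1 - m) * z, where m is a (relaxed)
    binary mask learned end-to-end."""

    def __init__(self, feat_dim: int, temperature: float = 0.1):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(feat_dim))  # per-dimension mask logits
        self.temperature = temperature

    def forward(self, z: torch.Tensor):
        # Straight-through estimator: hard 0/1 mask in the forward pass,
        # smooth sigmoid gradient in the backward pass.
        soft = torch.sigmoid(self.logits / self.temperature)
        hard = (soft > 0.5).float()
        mask = hard + soft - soft.detach()
        z_class = mask * z           # class-relevant feature
        z_aux = (1.0 - mask) * z     # auxiliary (domain) feature, complementary by construction
        return z_class, z_aux


# Usage: z_class would feed a class head and z_aux a domain head; the paper's
# information-theoretic and paired purification losses would further constrain both parts.
disentangler = BinaryMaskDisentangler(feat_dim=512)
z = torch.randn(8, 512)
z_class, z_aux = disentangler(z)
```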