Wang Yuxi, Zhang Zhaoxiang, Hao Wangli, Song Chunfeng
IEEE Trans Image Process. 2021;30:670-684. doi: 10.1109/TIP.2020.3037528. Epub 2020 Dec 4.
Image-to-image translation aims to learn the correspondence between a source and a target domain. Several state-of-the-art methods have made significant progress based on generative adversarial networks (GANs). However, most existing one-to-one translation methods ignore the correlations among different domain pairs. We argue that common information exists among different domain pairs and that it is vital for multi-domain-pair translation. In this paper, we propose a unified circular framework for multi-domain-pair translation that leverages a knowledge module shared across numerous domains. A selected translation pair can benefit from complementary information from other pairs, and the shared knowledge is conducive to mutual learning between domains. Moreover, an absolute consistency loss is proposed and applied to the corresponding feature maps to ensure intra-domain consistency. Furthermore, our model can be trained in an end-to-end manner. Extensive experiments demonstrate the effectiveness of our approach on several complex translation scenarios, such as thermal-IR switching, weather changing, and semantic transfer tasks.
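The abstract does not give the exact form of the absolute consistency loss. A minimal sketch, assuming it is an L1 (absolute-difference) penalty between corresponding feature maps from the shared knowledge module, might look as follows; the function name and tensor shapes are illustrative, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def absolute_consistency_loss(feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch: penalize the element-wise absolute difference
    between corresponding feature maps, encouraging intra-domain consistency.
    The paper's exact formulation may differ."""
    return F.l1_loss(feat_a, feat_b)

# Example usage with dummy feature maps (batch=4, channels=256, 32x32 spatial)
fa = torch.randn(4, 256, 32, 32)
fb = torch.randn(4, 256, 32, 32)
loss = absolute_consistency_loss(fa, fb)
```

Such a term would typically be added to the adversarial objective with a weighting coefficient chosen on a validation set.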