Wang Chaoyue, Xu Chang, Wang Chaohui, Tao Dacheng
IEEE Trans Image Process. 2018 Aug;27(8):4066-4079. doi: 10.1109/TIP.2018.2836316. Epub 2018 May 14.
In this paper, we propose Perceptual Adversarial Networks (PAN) for image-to-image transformations. Different from existing application-driven algorithms, PAN provides a generic framework for learning to map from input images to desired images (Fig. 1), such as a rainy image to its de-rained counterpart, object edges to photos, semantic labels to a scene image, etc. The proposed PAN consists of two feed-forward convolutional neural networks (CNNs): the image transformation network T and the discriminative network D. Besides the generative adversarial loss widely used in GANs, we propose the perceptual adversarial loss, which undergoes an adversarial training process between the image transformation network T and the hidden layers of the discriminative network D. The hidden layers and the output of the discriminative network D are updated to constantly and automatically discover the discrepancy between the transformed image and the corresponding ground truth, while the image transformation network T is trained to minimize the discrepancy explored by the discriminative network D. By integrating the generative adversarial loss and the perceptual adversarial loss, D and T can be trained alternately to solve image-to-image transformation tasks. Experiments on several image-to-image transformation tasks (e.g., image de-raining, image inpainting, etc.) demonstrate the effectiveness of the proposed PAN and its advantages over many existing works.
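The perceptual adversarial loss described above can be sketched in a few lines. The toy linear-ReLU "hidden layers", the per-layer weights `lambdas`, and the hinge margin `margin` below are illustrative assumptions standing in for the CNN discriminator's hidden layers and the paper's actual hyperparameters; this is a minimal sketch of the idea, not the paper's implementation.

```python
import numpy as np

def hidden_features(img, weights):
    # Toy stand-in for the discriminator D's hidden layers:
    # each "layer" is a linear map followed by a ReLU.
    feats = []
    h = img
    for W in weights:
        h = np.maximum(0.0, h @ W)
        feats.append(h)
    return feats

def perceptual_adversarial_loss(transformed, ground_truth, weights, lambdas):
    # Weighted sum of per-layer L1 discrepancies between the hidden
    # representations of T's output and the ground-truth image.
    f_t = hidden_features(transformed, weights)
    f_g = hidden_features(ground_truth, weights)
    return sum(lam * np.abs(a - b).mean()
               for lam, a, b in zip(lambdas, f_t, f_g))

def discriminator_hinge(loss_value, margin):
    # D is updated to keep the perceptual discrepancy above a margin,
    # so it keeps discovering new discrepancies as T improves.
    return max(margin - loss_value, 0.0)
```

In the alternating scheme, T takes gradient steps to minimize `perceptual_adversarial_loss` (plus the usual generative adversarial loss), while D takes steps to minimize `discriminator_hinge`, i.e., to push the measured discrepancy back up toward the margin.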