Haoyi Wang, Victor Sanchez, Chang-Tsun Li
IEEE Trans Image Process. 2021;30:5413-5425. doi: 10.1109/TIP.2021.3084106. Epub 2021 Jun 7.
Vanilla Generative Adversarial Networks (GANs) are commonly used to generate realistic images depicting aged and rejuvenated faces. However, their performance on the age-oriented face synthesis task is often compromised by the mode collapse issue, which may produce poorly synthesized faces with indistinguishable visual variations. In addition, recent age-oriented face synthesis methods use an L1 or L2 constraint to preserve the identity information in the synthesized faces, which implicitly limits identity permanence when the constraint is assigned a trivially small weighting factor. In this paper, we propose a method for the age-oriented face synthesis task that achieves high synthesis accuracy with strong identity permanence capabilities. Specifically, to achieve high synthesis accuracy, our method tackles the mode collapse issue with a novel Conditional Discriminator Pool, which consists of multiple discriminators, each targeting one particular age category. To achieve strong identity permanence, our method uses a novel Adversarial Triplet loss. This loss, based on the Triplet loss, adds a ranking operation that further pulls the positive embedding towards the anchor embedding, significantly reducing intra-class variance in the feature space. Through extensive experiments, we show that our proposed method outperforms state-of-the-art methods in terms of synthesis accuracy and identity permanence, both qualitatively and quantitatively.
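The abstract only outlines the Conditional Discriminator Pool. Below is a minimal PyTorch sketch of how such a pool could be wired up, assuming one discriminator per age category; `num_age_groups` and the `make_discriminator` factory are illustrative placeholders, not details taken from the paper.

```python
import torch.nn as nn

class ConditionalDiscriminatorPool(nn.Module):
    """Illustrative sketch (not the paper's implementation): a pool that
    holds one independent discriminator per age category."""
    def __init__(self, num_age_groups, make_discriminator):
        super().__init__()
        # One discriminator per age category.
        self.pool = nn.ModuleList(make_discriminator() for _ in range(num_age_groups))

    def forward(self, images, age_group):
        # Route the batch to the discriminator for its target age category,
        # so no single discriminator has to cover every age mode at once.
        return self.pool[age_group](images)
```

One plausible reading of why this targets mode collapse: since each discriminator only judges faces of a single age category, the generator cannot satisfy all of them with one dominant mode.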
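The Adversarial Triplet loss is likewise described only at a high level. As a hypothetical illustration, the sketch below combines the classic Triplet loss with an extra anchor-positive pulling term; the `pull_weight` parameter and the exact form of that extra term are assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def adversarial_triplet_loss(anchor, positive, negative, margin=0.5, pull_weight=1.0):
    """Hypothetical sketch, not the paper's exact formulation: a classic
    Triplet loss plus an extra term that keeps pulling the positive
    embedding towards the anchor even after the margin is satisfied."""
    d_ap = F.pairwise_distance(anchor, positive)  # anchor-positive distance
    d_an = F.pairwise_distance(anchor, negative)  # anchor-negative distance
    ranking = F.relu(d_ap - d_an + margin)        # standard triplet ranking term
    pull = pull_weight * d_ap                     # assumed extra anchor-positive pull
    return (ranking + pull).mean()

# Example usage with random 128-dim embeddings for a batch of 8 faces:
a, p, n = (torch.randn(8, 128) for _ in range(3))
loss = adversarial_triplet_loss(a, p, n)
```

The intuition matches the abstract's claim: the added pull term keeps shrinking anchor-positive distances, reducing intra-class variance in the feature space beyond what the margin-based ranking alone enforces.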