Yuval Nirkin, Yosi Keller, Tal Hassner
IEEE Trans Pattern Anal Mach Intell. 2023 Jan;45(1):560-575. doi: 10.1109/TPAMI.2022.3155571. Epub 2022 Dec 5.
We present Face Swapping GAN (FSGAN) for face swapping and reenactment. Unlike previous work, we offer a subject-agnostic swapping scheme that can be applied to pairs of faces without requiring training on those faces. We derive a novel iterative deep learning-based approach for face reenactment which adjusts for significant pose and expression variations and can be applied to a single image or a video sequence. For video sequences, we introduce a continuous interpolation of the face views based on reenactment, Delaunay triangulation, and barycentric coordinates. Occluded face regions are handled by a face completion network. Finally, we use a face blending network for seamless blending of the two faces while preserving the target skin color and lighting conditions. This network uses a novel Poisson blending loss, combining Poisson optimization with a perceptual loss. We compare our approach to existing state-of-the-art systems and show our results to be both qualitatively and quantitatively superior. This work extends the FSGAN method proposed in an earlier conference version of our work (Nirkin et al., 2019), and adds further experiments and results.
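The continuous view interpolation can be pictured concretely: reenacted face views are keyed by head pose, the 2D pose space is triangulated, and a query pose is rendered by barycentrically blending the three views at the corners of its enclosing triangle. Below is a minimal Python sketch of this idea using scipy.spatial.Delaunay; the function name, the yaw/pitch parameterization, and the nearest-view fallback are illustrative assumptions, not the paper's implementation.

```python
# Sketch of pose-space view interpolation (illustrative, not the authors' code):
# find the Delaunay triangle enclosing a query head pose and blend the three
# corner views with barycentric weights.
import numpy as np
from scipy.spatial import Delaunay

def interpolate_view(poses, views, query):
    """poses: (N, 2) yaw/pitch angles; views: (N, H, W, 3) face images;
    query: (2,) target pose. Returns the barycentrically blended view."""
    tri = Delaunay(poses)
    simplex = tri.find_simplex(query)
    if simplex < 0:                       # query outside the convex hull:
        nearest = np.argmin(np.linalg.norm(poses - query, axis=1))
        return views[nearest]             # fall back to the nearest view
    corners = tri.simplices[simplex]      # indices of the 3 enclosing views
    # Barycentric coordinates from the triangulation's affine transform.
    T = tri.transform[simplex]
    b = T[:2].dot(query - T[2])
    w = np.append(b, 1.0 - b.sum())       # three weights summing to 1
    # Weighted blend of the three corner views.
    return np.tensordot(w, views[corners].astype(np.float64), axes=1)
```

Because the weights vary smoothly inside each triangle and agree on shared edges, the rendered view changes continuously as the target pose moves through pose space.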
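The Poisson blending loss can likewise be sketched in outline: a gradient-domain term ties the Laplacian of the blended result to that of the swapped-in face inside the face mask, while the region outside the mask is tied to the target frame, and the paper combines such an objective with a perceptual loss. The PyTorch sketch below is a simplified, assumed formulation; the perceptual term and the paper's exact weighting are omitted, and all names are hypothetical.

```python
# Simplified Poisson-style blending loss (an assumption, not FSGAN's code).
import torch
import torch.nn.functional as F

# Fixed discrete Laplacian kernel, applied per channel via grouped convolution.
_LAPLACIAN = torch.tensor([[0., 1., 0.],
                           [1., -4., 1.],
                           [0., 1., 0.]]).view(1, 1, 3, 3)

def laplacian(img):
    """Per-channel discrete Laplacian of a (B, C, H, W) image batch."""
    k = _LAPLACIAN.to(img).repeat(img.shape[1], 1, 1, 1)
    return F.conv2d(img, k, padding=1, groups=img.shape[1])

def poisson_blend_loss(blend, source, target, mask):
    """blend: generator output; source: swapped-in face; target: background
    frame; mask: (B, 1, H, W) soft face mask. Matches gradients inside the
    mask and colors outside it (perceptual term omitted for brevity)."""
    grad_term = (mask * (laplacian(blend) - laplacian(source))).abs().mean()
    boundary_term = ((1 - mask) * (blend - target)).abs().mean()
    return grad_term + boundary_term
```

Matching gradients rather than pixel values inside the mask is what lets the blended face inherit the target's skin tone and lighting while keeping the source face's structure, which is the intuition behind casting Poisson optimization as a training loss.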