Jiang Yue-Ren, Chen Shu-Yu, Fu Hongbo, Gao Lin
IEEE Trans Vis Comput Graph. 2024 Jul;30(7):3444-3456. doi: 10.1109/TVCG.2023.3235364. Epub 2024 Jun 27.
The development of deep generative models has inspired various facial image editing methods, but many of them are difficult to apply directly to video editing due to challenges such as imposing 3D constraints, preserving identity consistency, and ensuring temporal coherence. To address these challenges, we propose a new framework operating in the StyleGAN2 latent space for identity-aware and shape-aware edit propagation on face videos. To make it easier to maintain identity, preserve the original 3D motion, and avoid shape distortions, we disentangle the StyleGAN2 latent vectors of face video frames, decoupling appearance, shape, expression, and motion from identity. An edit encoding module maps a sequence of image frames to continuous latent codes with 3D parametric control, and is trained in a self-supervised manner with an identity loss and triple shape losses. Our model supports propagation of edits in three forms: I. direct appearance editing on a specific keyframe, II. implicit editing of face shape via a given reference image, and III. existing latent-based semantic edits. Experiments show that our method works well for various in-the-wild videos and outperforms an animation-based approach and recent deep generative techniques.
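To make the abstract's training objective and edit-propagation idea concrete, below is a minimal PyTorch sketch. It is not the authors' implementation: the encoder architecture, the control-vector dimensionality (a 3DMM-style coefficient vector), the particular decomposition of the "triple shape losses", and all function names are assumptions for illustration.

```python
# Hypothetical sketch of (a) an edit-encoding module, (b) identity and
# triple shape losses, and (c) latent-space edit propagation. Shapes,
# architecture, and loss decomposition are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EditEncoder(nn.Module):
    """Maps a per-frame StyleGAN2 W+ latent plus a 3D parametric control
    vector (e.g. 3DMM shape/expression/pose coefficients) to an edited
    W+ latent, predicted as an additive offset."""
    def __init__(self, num_layers=18, latent_dim=512, ctrl_dim=257):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_layers * latent_dim + ctrl_dim, 1024),
            nn.LeakyReLU(0.2),
            nn.Linear(1024, num_layers * latent_dim),
        )
        self.out_shape = (num_layers, latent_dim)

    def forward(self, w_plus, ctrl):
        # w_plus: (B, 18, 512), ctrl: (B, ctrl_dim)
        x = torch.cat([w_plus.flatten(1), ctrl], dim=1)
        delta = self.net(x).view(-1, *self.out_shape)
        return w_plus + delta

def identity_loss(emb_edit, emb_src):
    # Cosine distance between face-recognition embeddings of the edited
    # and source frames (embeddings would come from a frozen recognition
    # network such as ArcFace; omitted here).
    return (1.0 - F.cosine_similarity(emb_edit, emb_src, dim=1)).mean()

def triple_shape_losses(shape_edit, shape_ref, lm_edit, lm_ref, shape_src):
    # One plausible reading of "triple shape losses": match the target's
    # 3D shape coefficients, match projected 2D landmarks, and penalize
    # drifting too far from the source shape.
    l_coef = F.l1_loss(shape_edit, shape_ref)
    l_lm = F.mse_loss(lm_edit, lm_ref)
    l_reg = F.l1_loss(shape_edit, shape_src)
    return l_coef + l_lm + l_reg

def propagate_keyframe_edit(w_frames, key_idx, w_key_edited):
    # Form I in the abstract: an appearance edit made on one keyframe is
    # expressed as a latent offset and applied to every frame.
    # w_frames: (T, 18, 512); w_key_edited: (18, 512)
    delta = w_key_edited - w_frames[key_idx]
    return w_frames + delta.unsqueeze(0)
```

The offset-based propagation at the end reflects a common latent-editing pattern (apply one W+ delta to all frames); how the paper actually combines it with the 3D parametric control is only described at a high level in the abstract.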