LSI Department, University of the Basque Country, Donostia-San Sebastián, Spain.
Department of Computer Science and Engineering, Universitat de Lleida, Lleida, Spain.
Comput Intell Neurosci. 2022 Oct 18;2022:1332122. doi: 10.1155/2022/1332122. eCollection 2022.
Recent technological advancements in Artificial Intelligence make it easy to create deepfakes and hyper-realistic videos, in which images and video clips are processed to create fake videos that appear authentic. Many of them are based on swapping faces without the consent of the person whose appearance and voice are used. As emotions are inherent in human communication, it is relevant to study how deepfakes transfer emotional expressions from the original recordings to the fakes. In this work, we conduct an in-depth study of facial emotional expression in deepfakes using a well-known face-swap-based deepfake database. First, we extracted the frames from the videos. Then, we analyzed the emotional expression in the original and faked versions of the video recordings for all performers in the database. Results show that emotional expressions are not adequately transferred from the original recordings to the deepfakes created from them. The high variability in emotions and performers detected between original and fake recordings indicates that performer emotion expressiveness should be considered for better deepfake generation or detection.