Firc Anton, Malinka Kamil, Hanáček Petr
Brno University of Technology, Božetěchova 2, Brno, 612 00, Czech Republic.
Heliyon. 2023 Apr 3;9(4):e15090. doi: 10.1016/j.heliyon.2023.e15090. eCollection 2023 Apr.
Deepfakes present an emerging threat in cyberspace. Recent developments in machine learning make deepfakes highly believable and very difficult to distinguish from genuine media. Not only humans but also machines struggle to identify deepfakes: current speaker and facial recognition systems may be easily fooled by carefully prepared synthetic media, i.e., deepfakes. We provide a detailed overview of state-of-the-art deepfake creation and detection methods for selected visual and audio domains. In contrast to other deepfake surveys, we focus on the threats that deepfakes pose to biometric systems (e.g., spoofing). We discuss both facial and speech deepfakes, and for each domain we define deepfake categories and their differences. For each deepfake category, we provide an overview of available creation tools, datasets, and detection methods. Our main contribution is a definition of attack vectors that, together with the differences between categories and reported real-world attacks, allows us to evaluate each category's threat to selected classes of biometric systems.