FakeCatcher: Detection of Synthetic Portrait Videos using Biological Signals.

Authors

Ciftci Umur Aybars, Demir Ilke, Yin Lijun

Publication

IEEE Trans Pattern Anal Mach Intell. 2020 Jul 15;PP. doi: 10.1109/TPAMI.2020.3009287.

Abstract

The recent proliferation of fake portrait videos poses direct threats to society, law, and privacy [1]. Believing the fake video of a politician, distributing fake pornographic content of celebrities, and fabricating impersonated fake videos as evidence in court are just a few real-world consequences of deep fakes. We present a novel approach to detecting synthetic content in portrait videos as a preventive solution to the emerging threat of deep fakes; in other words, we introduce a deep fake detector. We observe that detectors that blindly rely on deep learning are not effective at catching fake content, as generative models produce formidably realistic results. Our key assertion is that biological signals hidden in portrait videos can be used as an implicit descriptor of authenticity, because they are neither spatially nor temporally preserved in fake content. To prove and exploit this assertion, we first apply several signal transformations to the pairwise separation problem, achieving 99.39% accuracy. Second, we use those findings to formulate a generalized classifier for fake content by analyzing the proposed signal transformations and their corresponding feature sets. Third, we generate novel signal maps and employ a CNN to improve on our traditional classifier for detecting synthetic content. Lastly, we release an "in the wild" dataset of fake portrait videos collected as part of our evaluation process. We evaluate FakeCatcher on several datasets, obtaining accuracies of 96%, 94.65%, 91.50%, and 91.07% on Face Forensics [2], Face Forensics++ [3], CelebDF [4], and our new Deep Fakes Dataset, respectively. In addition, our approach yields a significantly higher detection rate than the baselines and does not depend on the source, generator, or properties of the fake content. We also analyze signals from various facial regions, under image distortions, with varying segment durations, from different generators, against unseen datasets, and under several dimensionality reduction techniques.
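The biological-signal cue the abstract describes can be illustrated with a minimal sketch: a crude remote-photoplethysmography (rPPG) style signal taken as the green-channel mean over a detected face region in each frame. This is an assumption for illustration, not the paper's pipeline; FakeCatcher combines several PPG variants over multiple facial regions, signal transformations, feature sets, and a CNN over spatiotemporal PPG maps. The OpenCV Haar detector, the 15-frame detrending window, and the helper name extract_green_signal below are choices made only for this sketch.

```python
# Minimal illustrative sketch (not the authors' code): average the green channel
# over a detected face region per frame to obtain a crude remote-PPG style signal.
import cv2
import numpy as np


def extract_green_signal(video_path, max_frames=300):
    """Return a 1-D green-channel mean signal over a detected face ROI."""
    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    samples = []
    while len(samples) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
        if len(faces) == 0:
            continue  # skip frames where no face is found
        x, y, w, h = faces[0]
        roi = frame[y:y + h, x:x + w]
        # OpenCV frames are BGR; channel index 1 is green.
        samples.append(roi[:, :, 1].mean())
    cap.release()

    signal = np.asarray(samples, dtype=np.float64)
    # Subtract a moving average (15-frame window, arbitrary here) so the periodic
    # pulse component, which generators tend not to preserve, stands out.
    if signal.size >= 15:
        signal = signal - np.convolve(signal, np.ones(15) / 15, mode="same")
    return signal
```

A downstream check in the spirit of the paper's assertion would then compare this signal between an authentic video and a suspected synthetic one, for example its spectral power in the typical heart-rate band (roughly 0.7-4 Hz) and its consistency across facial regions and over time.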

