IEEE Trans Pattern Anal Mach Intell. 2023 Aug;45(8):9922-9931. doi: 10.1109/TPAMI.2023.3253390. Epub 2023 Jun 30.
Highly realistic image and video synthesis has become possible, and a relatively simple task, with the rapid growth of generative adversarial networks (GANs). GAN-related applications, such as DeepFake image and video manipulation and adversarial attacks, have been used to disrupt and confound the truth of images and videos on social media. DeepFake technology aims to synthesize image content of high visual quality that can mislead the human visual system, while adversarial perturbations attempt to mislead deep neural networks into wrong predictions. Defense becomes difficult when adversarial perturbations and DeepFakes are combined. This study examines a novel deceptive mechanism based on statistical hypothesis testing against DeepFake manipulation and adversarial attacks. First, a deceptive model based on two isolated sub-networks is designed to generate two-dimensional random variables with a specific distribution for detecting DeepFake images and videos. A maximum-likelihood loss is proposed for training the deceptive model with the two isolated sub-networks. Afterward, a novel hypothesis-testing scheme is proposed for detecting DeepFake videos and images with the well-trained deceptive model. Comprehensive experiments demonstrate that the proposed decoy mechanism generalizes to compressed and unseen manipulation methods for both DeepFake and attack detection.
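To make the mechanism concrete, below is a minimal PyTorch sketch of the idea as the abstract describes it: two isolated sub-networks whose paired scalar outputs form a two-dimensional random variable, a maximum-likelihood training loss pulling that variable toward a target distribution on pristine inputs, and a hypothesis test flagging inputs whose outputs are improbable under that distribution. The choice of a standard 2D Gaussian as the "specific distribution", the toy convolutional architecture, and the chi-squared test statistic at the 1% level are all illustrative assumptions, not the paper's exact formulation.

# Hypothetical sketch: architecture, target distribution, and test statistic
# are illustrative assumptions inferred from the abstract.
import torch
import torch.nn as nn

class SubNet(nn.Module):
    """One isolated sub-network mapping an image to a single scalar."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

class DeceptiveModel(nn.Module):
    """Two isolated sub-networks whose outputs form a 2D random variable."""
    def __init__(self):
        super().__init__()
        self.net_a, self.net_b = SubNet(), SubNet()

    def forward(self, x):
        return torch.cat([self.net_a(x), self.net_b(x)], dim=1)  # shape (B, 2)

def ml_loss(z):
    """Negative log-likelihood under a standard 2D Gaussian target
    (up to a constant); maximum-likelihood training pushes the outputs
    on real images toward that distribution."""
    return 0.5 * (z ** 2).sum(dim=1).mean()

@torch.no_grad()
def detect(model, frames, threshold=9.21):
    """Hypothesis test per frame. Under H0 (pristine input), ||z||^2 follows
    a chi-squared law with 2 degrees of freedom; 9.21 is its 0.99 quantile,
    so exceeding it rejects H0 at the 1% significance level."""
    z = model(frames)
    stat = (z ** 2).sum(dim=1)
    return stat > threshold  # True = suspected DeepFake / adversarial input

# Toy usage: one training step on stand-in "real" images, then a test batch.
model = DeceptiveModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
real = torch.randn(8, 3, 64, 64)  # placeholder for real training images
opt.zero_grad()
ml_loss(model(real)).backward()
opt.step()
print(detect(model, torch.randn(4, 3, 64, 64)))

For video, the same per-frame statistic could be aggregated over all frames before thresholding; the per-frame test above is the simplest instance of the scheme.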