Nils C. Köbis, Barbora Doležalová, Ivan Soraperra
Center for Humans and Machines, Max Planck Institute for Human Development, 14195 Berlin, Germany.
Amsterdam School of Economics, University of Amsterdam, 1001 NJ Amsterdam, The Netherlands.
iScience. 2021 Oct 29;24(11):103364. doi: 10.1016/j.isci.2021.103364. eCollection 2021 Nov 19.
Hyper-realistic manipulations of audio-visual content, i.e., deepfakes, present new challenges for establishing the veracity of online content. Research on the human impact of deepfakes remains sparse. In a pre-registered behavioral experiment (N = 210), we show that (1) people cannot reliably detect deepfakes and (2) neither raising awareness nor introducing financial incentives improves their detection accuracy. Zeroing in on the underlying cognitive processes, we find that (3) people are biased toward mistaking deepfakes for authentic videos (rather than vice versa) and (4) they overestimate their own detection abilities. Together, these results suggest that people adopt a "seeing-is-believing" heuristic for deepfake detection while being overconfident in their (low) detection abilities. This combination renders people particularly susceptible to being influenced by deepfake content.