Matthews Taylor
Department of Philosophy, University of Nottingham, Nottingham, England.
Synthese. 2023;201(2):41. doi: 10.1007/s11229-022-04033-x. Epub 2023 Jan 23.
Recent developments in AI technology have led to increasingly sophisticated forms of video manipulation. One such form is the deepfake: an AI-generated video that typically depicts people doing and saying things they never did. In this paper, I demonstrate that there is a close structural relationship between deepfakes and more traditional fake barn cases in epistemology. Specifically, I argue that deepfakes generate a degree of epistemic risk analogous to that found in traditional cases. Given that barn cases have posed a long-standing challenge for virtue-theoretic accounts of knowledge, I consider whether a similar challenge extends to deepfakes. In doing so, I consider how Duncan Pritchard's recent anti-risk virtue epistemology meets the challenge. While Pritchard's account avoids problems in traditional barn cases, I claim that in the case of deepfakes it leads to local scepticism about knowledge from online videos. I end by considering how two alternative virtue-theoretic approaches might vindicate our epistemic dependence on videos in an increasingly digital world.