Center for Ethics, Department of Philosophy, University of Zurich.
Digital Society Initiative, University of Zurich.
Cogn Sci. 2021 Oct;45(10):e13032. doi: 10.1111/cogs.13032.
The potential for robots to deceive has received considerable attention recently. Many papers explore the technical possibility of a robot engaging in deception for beneficial purposes (e.g., in education or health). In this short experimental paper, I focus on a more paradigmatic case: robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment that investigates the following three questions: (a) Are ordinary people willing to ascribe deceptive intentions to artificial agents? (b) Are they as willing to judge a robot's lie to be a lie as they are when human agents engage in verbal deception? (c) Do people blame a lying artificial agent to the same extent as a lying human agent? The answer to all three questions is a resounding yes. This, I argue, implies that robot deception and its normative consequences deserve considerably more attention than they presently receive.