Can a Robot Lie? Exploring the Folk Concept of Lying as Applied to Artificial Agents.

Affiliations

Center for Ethics, Department of Philosophy, University of Zurich.

Digital Society Initiative, University of Zurich.

Publication

Cogn Sci. 2021 Oct;45(10):e13032. doi: 10.1111/cogs.13032.

Abstract

The potential capacity for robots to deceive has received considerable attention recently. Many papers explore the technical possibility for a robot to engage in deception for beneficial purposes (e.g., in education or health). In this short experimental paper, I focus on a more paradigmatic case: robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment that investigates the following three questions: (a) Are ordinary people willing to ascribe deceptive intentions to artificial agents? (b) Are they as willing to judge a robot lie as a lie as they would be when human agents engage in verbal deception? (c) Do people blame a lying artificial agent to the same extent as a lying human agent? The response to all three questions is a resounding yes. This, I argue, implies that robot deception and its normative consequences deserve considerably more attention than they presently receive.

Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d21f/9285490/69911bb3c0ef/COGS-45-0-g002.jpg
