Andres Rosero, Elizabeth Dula, Harris Kelly, Bertram F. Malle, Elizabeth K. Phillips
Applied Psychology and Autonomous Systems Lab, Department of Psychology, College of Humanities and Social Sciences, George Mason University, Fairfax, VA, United States.
Department of Psychology, University of Virginia, Charlottesville, VA, United States.
Front Robot AI. 2024 Sep 5;11:1409712. doi: 10.3389/frobt.2024.1409712. eCollection 2024.
Robots are being introduced into increasingly social environments. As these robots become more ingrained in social spaces, they will have to abide by the social norms that guide human interactions. At times, however, robots will violate norms and perhaps even deceive their human interaction partners. This study provides some of the first evidence for how people perceive and evaluate robot deception, especially three types of deception behaviors theorized in the technology ethics literature: external state deception (cues that intentionally misrepresent or omit details from the external world, e.g., lying), hidden state deception (cues designed to conceal or obscure the presence of a capacity or internal state the robot possesses), and superficial state deception (cues that suggest a robot has some capacity or internal state that it lacks).
Participants (N = 498) were assigned to read one of three vignettes, each corresponding to one of the deceptive behavior types. Participants provided responses to qualitative and quantitative measures, which examined the degree to which people approved of the behaviors, perceived them as deceptive, found them justified, and believed that other agents were involved in the robots' deceptive behavior.
Participants rated hidden state deception as the most deceptive of the three deception types and approved of it the least. They considered external state and superficial state deception behaviors to be comparably deceptive, but whereas external state deception was generally approved of, superficial state deception was not. Participants in the hidden state condition often implicated agents other than the robot in the deception.
This study provides some of the first evidence for how people perceive and evaluate the deceptiveness of robot deception behavior types. It found that people distinguish among the three types of deception behaviors, perceive them as differently deceptive, and approve of them to different degrees. Participants also saw hidden state deception, at least, as stemming more from the robot's designers than from the robot itself.