Swiderska Aleksandra, Küster Dennis
Department of Psychology, University of Warsaw.
Department of Computer Science, University of Bremen.
Cogn Sci. 2020 Jul;44(7):e12872. doi: 10.1111/cogs.12872.
A robot's decision to harm a person is sometimes considered the ultimate proof of its having gained a human-like mind. Here, we contrasted predictions about the attribution of mental capacities derived from moral typecasting theory with the denial of agency described in the dehumanization literature. Experiments 1 and 2 investigated mind perception for intentionally and accidentally harmful robotic agents based on text and image vignettes. Experiment 3 disambiguated agent intention (malevolent vs. benevolent) and additionally varied the type of agent (robotic vs. human) using short computer-generated animations. Harmful robotic agents were consistently imbued with mental states to a lesser degree than benevolent agents, supporting the dehumanization account. Further results revealed that a human moral patient appeared to suffer less when depicted with a robotic agent than with another human. The findings suggest that future robots may become subject to human-like dehumanization mechanisms, challenging established beliefs about anthropomorphism in the domain of moral interactions.