O'Reilly Ziggy, Marchesi Serena, Wykowska Agnieszka
Italian Institute of Technology, Social Cognition in Human-Robot Interaction (S4HRI), Via Enrico Melen 83, 16152, Genoa, Italy.
Department of Psychology, University of Turin, Via Verdi 8, 10124, Turin, Italy.
Sci Rep. 2025 Feb 3;15(1):4128. doi: 10.1038/s41598-024-79027-5.
In the era of renewed fascination with AI and robotics, one needs to address questions related to their societal impact, particularly in terms of moral responsibility and intentionality. In seven vignette-based experiments, we investigated whether the consequences of a robot's or a human's actions influenced participants' ratings of intentionality and moral responsibility. For the robot, when the vignettes contained mentalistic descriptions, moral responsibility ratings were higher for negative action consequences than for positive action consequences, but there was no difference in intentionality ratings. For the human, both moral responsibility and intentionality ratings were higher for negative action consequences. Once the mentalistic descriptions were removed from the vignettes and the moral responsibility question was clarified, we found a reversed asymmetry: for both robots and humans, participants attributed more intentionality and praise to positive action consequences than to negative action consequences. We suggest that this reversal may occur because people default to charitable explanations when explicit references to culpable mental states are removed from the vignettes.