Behavioural Science Institute, Radboud University Nijmegen, The Netherlands.
Cyberpsychol Behav Soc Netw. 2021 May;24(5):332-336. doi: 10.1089/cyber.2020.0035. Epub 2020 Nov 18.
Robots are becoming an integral part of society, yet the extent to which we are prosocial toward these nonliving objects is unclear. While previous research shows that we tend to take care of robots in high-risk, high-consequence situations, this has not been investigated in more day-to-day, low-consequence situations. We therefore used an experimental paradigm (the Social Mindfulness "SoMi" paradigm) that involves a trade-off between participants' own interests and their willingness to take a task partner's needs into account. In two experiments, we investigated whether participants would take the needs of a robotic task partner into account to the same extent as those of a human task partner (Study I), and whether this was modulated by participants' anthropomorphic attributions to said robot (Study II). In Study I, participants performed a social decision-making task once by themselves (solo context) and once with a task partner (either a human or a robot). In Study II, participants performed the same task, but this time with both a human and a robotic task partner. The task partners were introduced via neutral or anthropomorphic priming stories. Results indicate that humanizing a task partner indeed increases our tendency to take that partner's needs into account in a social decision-making task. However, this effect was found only for a human task partner, not for a robot. Thus, while anthropomorphizing a robot may lead us to save it when it is about to perish, it does not make us more socially considerate of it in day-to-day situations.