Human-Computer Interaction Institute, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15206.
Sony Computer Science Laboratories, Inc., Tokyo 141-0022, Japan.
Proc Natl Acad Sci U S A. 2023 Dec 19;120(51):e2307804120. doi: 10.1073/pnas.2307804120. Epub 2023 Dec 11.
Forms of both simple and complex machine intelligence are increasingly acting within human groups in order to affect collective outcomes. Considering the nature of collective action problems, however, such involvement could paradoxically and unintentionally suppress existing beneficial social norms in humans, such as those involving cooperation. Here, we test theoretical predictions about such an effect using a unique cyber-physical lab experiment where online participants (n = 300 in 150 dyads) drive robotic vehicles remotely in a coordination game. We show that autobraking assistance increases human altruism, such as giving way to others, and that communication helps people to make mutual concessions. On the other hand, autosteering assistance completely inhibits the emergence of reciprocity between people in favor of self-interest maximization. The negative social repercussions persist even after the assistance system is deactivated. Furthermore, adding communication capabilities does not relieve this inhibition of reciprocity because people rarely communicate in the presence of autosteering assistance. Our findings suggest that active safety assistance (a form of simple AI support) can alter the dynamics of social coordination between people, including by affecting the trade-off between individual safety and social reciprocity. The difference between autobraking and autosteering assistance appears to relate to whether the assistive technology supports or replaces human agency in social coordination dilemmas. Humans have developed norms of reciprocity to address collective challenges, but such tacit understandings could break down in situations where machine intelligence is involved in human decision-making without having any normative commitments.
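To make the coordination-dilemma structure concrete, the following minimal Python sketch models the driving task as a repeated chicken-style game between two players who must each choose to proceed or give way. The payoff values, the PAYOFFS table, and the strategy functions are illustrative assumptions for exposition, not the paper's actual experimental parameters; the sketch simply shows why turn-taking reciprocity outperforms mutual self-interest maximization in a repeated dyadic coordination game of this kind.

# Hedged sketch: a chicken-style "narrow passage" coordination game.
# All payoff values below are illustrative assumptions, not the
# experimental parameters used in the study.

PAYOFFS = {
    ("proceed", "proceed"): (-10, -10),  # collision / deadlock
    ("proceed", "yield"):   (  5,  -1),  # one passes, one waits
    ("yield",   "proceed"): ( -1,   5),
    ("yield",   "yield"):   ( -2,  -2),  # both hesitate, both delayed
}

def play_repeated(strategy_a, strategy_b, rounds=20):
    """Run a repeated dyadic game; each strategy sees the round index."""
    total_a = total_b = 0
    for t in range(rounds):
        a, b = strategy_a(t), strategy_b(t)
        pa, pb = PAYOFFS[(a, b)]
        total_a += pa
        total_b += pb
    return total_a, total_b

# Reciprocity as turn-taking: partners alternate who gives way.
turn_taker_even = lambda t: "yield" if t % 2 == 0 else "proceed"
turn_taker_odd  = lambda t: "proceed" if t % 2 == 0 else "yield"

# Self-interest maximization: never give way (loosely analogous to the
# autosteering condition, where the assistance resolves the conflict
# and human agency in the concession is replaced).
selfish = lambda t: "proceed"

print(play_repeated(turn_taker_even, turn_taker_odd))  # (40, 40)
print(play_repeated(selfish, selfish))                 # (-200, -200)

Under these assumed payoffs, alternating concessions earn each player 40 over 20 rounds, while mutual non-yielding costs each player 200, which is the sense in which norms of reciprocity resolve the collective action problem that pure self-interest cannot.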