Social Science Division, New York University Abu Dhabi, Abu Dhabi, UAE.
School of Social Sciences and Technology, Technical University of Munich, Munich, Germany.
Nat Commun. 2023 May 30;14(1):3108. doi: 10.1038/s41467-023-38592-5.
With the progress of artificial intelligence and the emergence of global online communities, humans and machines increasingly participate in mixed collectives in which they can help or hinder each other. Human societies have had thousands of years to consolidate the social norms that promote cooperation, but mixed collectives often struggle to articulate the norms that hold when humans coexist with machines. In five studies involving 7917 individuals, we document how people treat machines differently than humans in a stylized society of beneficiaries, helpers, punishers, and trustors. We show that helpers and punishers gain different amounts of trust when they follow cooperative norms than when they violate them. We also demonstrate that the trust gained by norm-followers is associated with trustors' assessment of the consensus around cooperative norms of helping and punishing. Lastly, we establish that, under certain conditions, informing trustors about the norm consensus over helping tends to reduce the differential treatment of both machines and the people interacting with them. These results allow us to anticipate how humans may develop cooperative norms for human-machine collectives, specifically by drawing on norms already extant in human-only groups. We also demonstrate that this evolution may be accelerated by making people aware of their emerging consensus.