Makovi Kinga, Bonnefon Jean-François, Oudah Mayada, Sargsyan Anahit, Rahwan Talal
Social Science Division, New York University Abu Dhabi, Abu Dhabi, UAE.
Toulouse School of Economics, CNRS (TSM-R), University of Toulouse Capitole, Toulouse, France.
iScience. 2025 Jun 6;28(7):112833. doi: 10.1016/j.isci.2025.112833. eCollection 2025 Jul 18.
High levels of human-machine cooperation are required to combine the strengths of human and artificial intelligence. Here, we investigate strategies to overcome the machine penalty, where people are less cooperative with partners they assume to be machines than with partners they assume to be humans. Using a large-scale iterative public goods game with nearly 2,000 participants, we find that peer rewards or peer punishments can each promote cooperation with partners assumed to be machines but do not overcome the machine penalty. Their combination, however, eliminates the machine penalty, because it is uniquely effective for partners assumed to be machines and inefficient for partners assumed to be humans. These findings provide a nuanced road map for designing a cooperative environment for humans and machines, depending on the exact goals of the designer.