
Humans program artificial delegates to accurately solve collective-risk dilemmas but lack precision.

Author information

Inês Terrucha, Elias Fernández Domingos, Rémi Suchon, Francisco C. Santos, Pieter Simoens, Tom Lenaerts

Affiliations

Internet technology and Data Science lab, Department of Information Technology, Ghent University-IMEC, Ghent 9052, Belgium.

Artificial Intelligence lab, Computer Science Department, Vrije Universiteit Brussel, Brussels 1050, Belgium.

Publication information

Proc Natl Acad Sci U S A. 2025 Jun 24;122(25):e2319942121. doi: 10.1073/pnas.2319942121. Epub 2025 Jun 16.

Abstract

In an era increasingly influenced by autonomous machines, it is only a matter of time before strategic individual decisions that impact collective goods are also made virtually through the use of artificial delegates. Through a series of behavioral experiments that combine delegation to autonomous agents with different choice architectures, we pinpoint what may get lost in translation when humans delegate to algorithms. We focus on the collective-risk dilemma, a game in which participants must decide whether to contribute to a public good, which must reach a target for them to keep their personal endowments. To test the effect of delegation beyond its functionality as a commitment device, participants are asked to play the game a second time, with the same group, where they are given the chance to reprogram their agents. As our main result we find that, when the action space is constrained, people who delegate contribute more to the public good, even if they have experienced more failure and inequality than people who do not delegate. However, they are not more successful. Failing to reach the target, after getting close to it, can be attributed to precision errors in the agent's algorithm that cannot be corrected during the game. Thus, with the digitization and subsequent limitation of our interactions, artificial delegates appear to be a solution to help preserve public goods over many iterations of risky situations. But actual success can only be achieved if humans learn to adjust their agents' algorithms.
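The round structure described above can be sketched in a few lines: each player contributes part of an endowment, the group succeeds if the total reaches a target, and otherwise everyone risks losing what they kept. This is a minimal illustrative sketch, not the paper's experimental implementation; the group size, endowment, target, and risk level below are assumed values chosen only to show how a small precision error in delegated contributions can cause failure.

```python
import random

def collective_risk_round(contributions, endowment, target, risk=0.9, rng=None):
    """One round of a collective-risk dilemma (illustrative parameters).

    Each player starts with `endowment` and contributes some amount.
    If total contributions reach `target`, everyone keeps what remains;
    otherwise, with probability `risk`, all remaining endowments are lost.
    """
    rng = rng or random.Random(0)  # seeded for reproducibility of the example
    remaining = [endowment - c for c in contributions]
    success = sum(contributions) >= target
    if not success and rng.random() < risk:
        remaining = [0.0] * len(contributions)  # collective loss
    return success, remaining

# Six players; four "contributors" each give 0.99 instead of the fair share
# of 1.0, so the group total (3.96) falls just short of the target (4.0) --
# a small precision error with a collectively catastrophic outcome.
success, payoffs = collective_risk_round(
    [0.99, 0.99, 0.99, 0.99, 0.0, 0.0], endowment=2.0, target=4.0)
```

With the seeded generator used here, the near-miss triggers the risky loss and every player ends the round with nothing, which mirrors the paper's point that agents whose algorithms cannot be adjusted mid-game may repeat such precision failures.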

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/412a/12207457/a559f25a3c75/pnas.2319942121fig01.jpg
