Static network structure cannot stabilize cooperation among large language model agents.

Authors

Han Jin, Battu Balaraju, Romić Ivan, Rahwan Talal, Holme Petter

Affiliations

Department of Computer Science, Aalto University, Espoo, Finland.

New York University Abu Dhabi, Abu Dhabi, United Arab Emirates.

Publication

PLoS One. 2025 May 22;20(5):e0320094. doi: 10.1371/journal.pone.0320094. eCollection 2025.

DOI: 10.1371/journal.pone.0320094
PMID: 40402952
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12097601/
Abstract

Large language models (LLMs) are increasingly used to model human social behavior, with recent research exploring their ability to simulate social dynamics. Here, we test whether LLMs mirror human behavior in social dilemmas, where individual and collective interests conflict. Humans generally cooperate more than expected in laboratory settings, showing less cooperation in well-mixed populations but more in fixed networks. In contrast, LLMs tend to exhibit greater cooperation in well-mixed settings. This raises a key question: Are LLMs able to emulate human behavior in cooperative dilemmas on networks? In this study, we examine networked interactions where agents repeatedly engage in the Prisoner's Dilemma within both well-mixed and structured network configurations, aiming to identify parallels in cooperative behavior between LLMs and humans. Our findings indicate critical distinctions: while humans tend to cooperate more within structured networks, LLMs display increased cooperation mainly in well-mixed environments, with limited adjustment to networked contexts. Notably, LLM cooperation also varies across model types, illustrating the complexities of replicating human-like social adaptability in artificial agents. These results highlight a crucial gap: LLMs struggle to emulate the nuanced, adaptive social strategies humans deploy in fixed networks. Unlike human participants, LLMs do not alter their cooperative behavior in response to network structures or evolving social contexts, missing the reciprocity norms that humans adaptively employ. This limitation points to a fundamental need in future LLM design: to integrate a deeper comprehension of social norms, enabling more authentic modeling of human-like cooperation and adaptability in networked environments.
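The experimental protocol described in the abstract, repeated Prisoner's Dilemma play under well-mixed random matching versus fixed network neighbors, can be sketched in a few lines of code. The Python sketch below illustrates that contrast only in outline: the payoff values (T=5, R=3, P=1, S=0), the ring topology, the noise rate, and the tit-for-tat stand-in for an LLM agent are all illustrative assumptions, not the authors' actual parameters or prompting protocol.

```python
import random

# Payoff matrix for one Prisoner's Dilemma round (row player, column player).
# The standard ordering T > R > P > S is assumed, with T=5, R=3, P=1, S=0.
PAYOFF = {('C', 'C'): (3, 3),   # reward for mutual cooperation (R, R)
          ('C', 'D'): (0, 5),   # sucker's payoff vs. temptation (S, T)
          ('D', 'C'): (5, 0),
          ('D', 'D'): (1, 1)}   # punishment for mutual defection (P, P)

def choose(me, partner, seen, rng, noise=0.05):
    """Stand-in decision rule: cooperate first, then mirror the partner's
    last observed move (tit-for-tat), with a small error rate. A real run
    would prompt an LLM agent with the interaction history here instead."""
    move = seen[me].get(partner, 'C')
    if rng.random() < noise:
        return 'D' if move == 'C' else 'C'
    return move

def simulate(n=12, rounds=50, structured=True, seed=0):
    rng = random.Random(seed)
    seen = {i: {} for i in range(n)}       # agent -> {partner: their last move}
    score = [0] * n
    coop = total = 0
    for _ in range(rounds):
        if structured:
            # Fixed ring network: every agent faces the same neighbor each round.
            pairs = [(i, (i + 1) % n) for i in range(n)]
        else:
            # Well-mixed population: partners are re-drawn at random every round.
            order = rng.sample(range(n), n)
            pairs = list(zip(order[::2], order[1::2]))
        for i, j in pairs:
            a = choose(i, j, seen, rng)
            b = choose(j, i, seen, rng)
            pa, pb = PAYOFF[(a, b)]
            score[i] += pa
            score[j] += pb
            seen[i][j], seen[j][i] = b, a  # each agent remembers the partner's move
            coop += (a == 'C') + (b == 'C')
            total += 2
    return coop / total

if __name__ == '__main__':
    print(f"cooperation rate, fixed ring : {simulate(structured=True):.2f}")
    print(f"cooperation rate, well-mixed : {simulate(structured=False):.2f}")
```

In a faithful replication, `choose` would query an LLM rather than apply tit-for-tat; the paper's central finding is that, unlike human participants, the models' cooperation does not stabilize under the fixed-neighbor condition.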

Figures (PMC full text)

Fig 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c815/12097601/19c2403d6e63/pone.0320094.g001.jpg
Fig 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c815/12097601/6b111e4b9799/pone.0320094.g002.jpg
Fig 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c815/12097601/6d959ae49288/pone.0320094.g003.jpg
Fig 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c815/12097601/73333ff06a97/pone.0320094.g004.jpg
Fig 5: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c815/12097601/ef6ee88224ab/pone.0320094.g005.jpg
Fig 6: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c815/12097601/ec9b81ab871a/pone.0320094.g006.jpg
Fig 7: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c815/12097601/637455758529/pone.0320094.g007.jpg
