

Algorithm exploitation: Humans are keen to exploit benevolent AI.

Authors

Karpus Jurgis, Krüger Adrian, Verba Julia Tovar, Bahrami Bahador, Deroy Ophelia

Affiliations

Faculty of Philosophy, Philosophy of Science and the Study of Religion, LMU Munich, Geschwister-Scholl-Platz 1, 80539 Munich, Germany.

Department of General and Educational Psychology, LMU Munich, Leopoldstraße 13, 80802 Munich, Germany.

Publication

iScience. 2021 Jun 1;24(6):102679. doi: 10.1016/j.isci.2021.102679. eCollection 2021 Jun 25.

DOI: 10.1016/j.isci.2021.102679
PMID: 34189440
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8219775/
Abstract

We cooperate with other people despite the risk of being exploited or hurt. If future artificial intelligence (AI) systems are benevolent and cooperative toward us, what will we do in return? Here we show that our cooperative dispositions are weaker when we interact with AI. In nine experiments, humans interacted with either another human or an AI agent in four classic social dilemma economic games and a newly designed game of Reciprocity that we introduce here. Contrary to the hypothesis that people mistrust algorithms, participants trusted their AI partners to be as cooperative as humans. However, they did not return AI's benevolence as much and exploited the AI more than humans. These findings warn that future self-driving cars or co-working robots, whose success depends on humans' returning their cooperativeness, run the risk of being exploited. This vulnerability calls not just for smarter machines but also better human-centered policies.


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3199/8219775/50d51914a067/gr1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3199/8219775/c3905c0ac889/gr2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3199/8219775/722e0b582d45/gr3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3199/8219775/4684133fd975/fx1.jpg

Similar articles

1
Algorithm exploitation: Humans are keen to exploit benevolent AI.
iScience. 2021 Jun 1;24(6):102679. doi: 10.1016/j.isci.2021.102679. eCollection 2021 Jun 25.
2
Should artificial intelligence have lower acceptable error rates than humans?
BJR Open. 2023 Apr 13;5(1):20220053. doi: 10.1259/bjro.20220053. eCollection 2023.
3
Social Preferences Toward Humans and Machines: A Systematic Experiment on the Role of Machine Payoffs.
Perspect Psychol Sci. 2025 Jan;20(1):165-181. doi: 10.1177/17456916231194949. Epub 2023 Sep 26.
4
How people reason with counterfactual and causal explanations for Artificial Intelligence decisions in familiar and unfamiliar domains.
Mem Cognit. 2023 Oct;51(7):1481-1496. doi: 10.3758/s13421-023-01407-5. Epub 2023 Mar 24.
5
Artificial intelligence in orthodontics: Evaluation of a fully automated cephalometric analysis using a customized convolutional neural network.
J Orofac Orthop. 2020 Jan;81(1):52-68. doi: 10.1007/s00056-019-00203-8. Epub 2019 Dec 18.
6
Trust Toward Robots and Artificial Intelligence: An Experimental Approach to Human-Technology Interactions Online.
Front Psychol. 2020 Dec 3;11:568256. doi: 10.3389/fpsyg.2020.568256. eCollection 2020.
7
Artificial intelligence in communication impacts language and social relationships.
Sci Rep. 2023 Apr 4;13(1):5487. doi: 10.1038/s41598-023-30938-9.
8
Effects of Gender and Relationship Type on the Response to Artificial Intelligence.
Cyberpsychol Behav Soc Netw. 2019 Apr;22(4):249-253. doi: 10.1089/cyber.2018.0581. Epub 2019 Mar 13.
9
How AI's Self-Prolongation Influences People's Perceptions of Its Autonomous Mind: The Case of U.S. Residents.
Behav Sci (Basel). 2023 Jun 4;13(6):470. doi: 10.3390/bs13060470.
10
The system of autono-mobility: computer vision and urban complexity-reflections on artificial intelligence at urban scale.
AI Soc. 2023;38(3):1111-1122. doi: 10.1007/s00146-022-01590-0. Epub 2023 May 8.

Cited by

1
Evidence of spillovers from (non)cooperative human-bot to human-human interactions.
iScience. 2025 Jun 25;28(8):113006. doi: 10.1016/j.isci.2025.113006. eCollection 2025 Aug 15.
2
The science fiction science method.
Nature. 2025 Aug;644(8075):51-58. doi: 10.1038/s41586-025-09194-6. Epub 2025 Aug 6.
3
Rewards and punishments help humans overcome biases against cooperation partners assumed to be machines.

References

1
Confronting barriers to human-robot cooperation: balancing efficiency and risk in machine behavior.
iScience. 2020 Dec 17;24(1):101963. doi: 10.1016/j.isci.2020.101963. eCollection 2021 Jan 22.
2
Network Engineering Using Autonomous Agents Increases Cooperation in Human Groups.
iScience. 2020 Aug 6;23(9):101438. doi: 10.1016/j.isci.2020.101438.
3
Reliability from α to ω: A tutorial.
iScience. 2025 Jun 6;28(7):112833. doi: 10.1016/j.isci.2025.112833. eCollection 2025 Jul 18.
4
Humans program artificial delegates to accurately solve collective-risk dilemmas but lack precision.
Proc Natl Acad Sci U S A. 2025 Jun 24;122(25):e2319942121. doi: 10.1073/pnas.2319942121. Epub 2025 Jun 16.
5
Reputation-based reciprocity in human-bot and human-human networks.
PNAS Nexus. 2025 May 9;4(5):pgaf150. doi: 10.1093/pnasnexus/pgaf150. eCollection 2025 May.
6
Human cooperation with artificial agents varies across countries.
Sci Rep. 2025 Mar 22;15(1):10000. doi: 10.1038/s41598-025-92977-8.
7
Cooperative bots exhibit nuanced effects on cooperation across strategic frameworks.
J R Soc Interface. 2025 Jan;22(222):20240427. doi: 10.1098/rsif.2024.0427. Epub 2025 Jan 29.
8
The impact of labeling automotive AI as trustworthy or reliable on user evaluation and technology acceptance.
Sci Rep. 2025 Jan 9;15(1):1481. doi: 10.1038/s41598-025-85558-2.
9
AI-enhanced collective intelligence.
Patterns (N Y). 2024 Oct 10;5(11):101074. doi: 10.1016/j.patter.2024.101074. eCollection 2024 Nov 8.
10
A new sociology of humans and machines.
Nat Hum Behav. 2024 Oct;8(10):1864-1876. doi: 10.1038/s41562-024-02001-8. Epub 2024 Oct 22.
Psychol Assess. 2019 Dec;31(12):1395-1411. doi: 10.1037/pas0000754. Epub 2019 Aug 5.
4
Machine behaviour.
Nature. 2019 Apr;568(7753):477-486. doi: 10.1038/s41586-019-1138-y. Epub 2019 Apr 24.
5
Sensorimotor communication beyond the body: The case of driving. Comment on "The body talks: sensorimotor communication and its brain and kinematic signatures" by G. Pezzulo et al.
Phys Life Rev. 2019 Mar;28:31-33. doi: 10.1016/j.plrev.2019.01.013. Epub 2019 Jan 30.
6
Reciprocity of social influence.
Nat Commun. 2018 Jun 26;9(1):2474. doi: 10.1038/s41467-018-04925-y.
7
Conducting interactive experiments online.
Exp Econ. 2018;21(1):99-131. doi: 10.1007/s10683-017-9527-2. Epub 2017 May 9.
8
Cooperating with machines.
Nat Commun. 2018 Jan 16;9(1):233. doi: 10.1038/s41467-017-02597-8.
9
Negotiating the Traffic: Can Cognitive Science Help Make Autonomous Vehicles a Reality?
Trends Cogn Sci. 2018 Feb;22(2):93-95. doi: 10.1016/j.tics.2017.11.008. Epub 2017 Dec 15.
10
Team reasoning: Solving the puzzle of coordination.
Psychon Bull Rev. 2018 Oct;25(5):1770-1783. doi: 10.3758/s13423-017-1399-0.