Social Preferences Toward Humans and Machines: A Systematic Experiment on the Role of Machine Payoffs.

Authors

von Schenk Alicia, Klockmann Victor, Köbis Nils

Affiliations

Center for Humans and Machines, Max Planck Institute for Human Development.

Department of Economics, University of Würzburg.

Publication

Perspect Psychol Sci. 2025 Jan;20(1):165-181. doi: 10.1177/17456916231194949. Epub 2023 Sep 26.

DOI: 10.1177/17456916231194949
PMID: 37751604
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11720266/
Abstract

There is growing interest in the field of cooperative artificial intelligence (AI), that is, settings in which humans and machines cooperate. By now, more than 160 studies from various disciplines have reported on how people cooperate with machines in behavioral experiments. Our systematic review of the experimental instructions reveals that the implementation of the machine payoffs and the information participants receive about them differ drastically across these studies. In an online experiment (N = 1,198), we compare how these different payoff implementations shape people's revealed social preferences toward machines. When matched with machine partners, people reveal substantially stronger social preferences and reciprocity when they know that a human beneficiary receives the machine payoffs than when they know that no such "human behind the machine" exists. When participants are not informed about machine payoffs, we find weak social preferences toward machines. Comparing survey answers with those from a follow-up study (N = 150), we conclude that people form their beliefs about machine payoffs in a self-serving way. Thus, our results suggest that the extent to which humans cooperate with machines depends on the implementation of the machine's earnings and the information people receive about them.

Figures (Figs. 1–7)

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1944/11720266/7f57970de673/10.1177_17456916231194949-fig1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1944/11720266/c25b91078156/10.1177_17456916231194949-fig2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1944/11720266/8e2894fae91d/10.1177_17456916231194949-fig3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1944/11720266/de099443373d/10.1177_17456916231194949-fig4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1944/11720266/3dbebbb44a17/10.1177_17456916231194949-fig5.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1944/11720266/68064405b360/10.1177_17456916231194949-fig6.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1944/11720266/ae72717a085f/10.1177_17456916231194949-fig7.jpg

Similar Articles

1. Social Preferences Toward Humans and Machines: A Systematic Experiment on the Role of Machine Payoffs.
   Perspect Psychol Sci. 2025 Jan;20(1):165-181. doi: 10.1177/17456916231194949. Epub 2023 Sep 26.
2. In human-machine trust, humans rely on a simple averaging strategy.
   Cogn Res Princ Implic. 2024 Sep 2;9(1):58. doi: 10.1186/s41235-024-00583-5.
3. Monetary payoffs modulate reciprocity expectations in outcome evaluations: An event-related potential study.
   Eur J Neurosci. 2021 Feb;53(3):902-915. doi: 10.1111/ejn.15100. Epub 2021 Jan 11.
4. Algorithm exploitation: Humans are keen to exploit benevolent AI.
   iScience. 2021 Jun 1;24(6):102679. doi: 10.1016/j.isci.2021.102679. eCollection 2021 Jun 25.
5. Trust within human-machine collectives depends on the perceived consensus about cooperative norms.
   Nat Commun. 2023 May 30;14(1):3108. doi: 10.1038/s41467-023-38592-5.
6. Cooperation with autonomous machines through culture and emotion.
   PLoS One. 2019 Nov 11;14(11):e0224758. doi: 10.1371/journal.pone.0224758. eCollection 2019.
7. Stingy bots can improve human welfare in experimental sharing networks.
   Sci Rep. 2023 Oct 20;13(1):17957. doi: 10.1038/s41598-023-44883-0.
8. Evolution of reciprocity with limited payoff memory.
   Proc Biol Sci. 2024 Jun;291(2025):20232493. doi: 10.1098/rspb.2023.2493. Epub 2024 Jun 19.
9. Cooperating with machines.
   Nat Commun. 2018 Jan 16;9(1):233. doi: 10.1038/s41467-017-02597-8.
10. Preferences for Artificial Intelligence Clinicians Before and During the COVID-19 Pandemic: Discrete Choice Experiment and Propensity Score Matching Study.
    J Med Internet Res. 2021 Mar 2;23(3):e26997. doi: 10.2196/26997.

Cited By

1. Evidence of spillovers from (non)cooperative human-bot to human-human interactions.
   iScience. 2025 Jun 25;28(8):113006. doi: 10.1016/j.isci.2025.113006. eCollection 2025 Aug 15.
2. Rewards and punishments help humans overcome biases against cooperation partners assumed to be machines.
   iScience. 2025 Jun 6;28(7):112833. doi: 10.1016/j.isci.2025.112833. eCollection 2025 Jul 18.
3. Reputation-based reciprocity in human-bot and human-human networks.
   PNAS Nexus. 2025 May 9;4(5):pgaf150. doi: 10.1093/pnasnexus/pgaf150. eCollection 2025 May.
4. Adverse reactions to the use of large language models in social interactions.
   PNAS Nexus. 2025 Apr 7;4(4):pgaf112. doi: 10.1093/pnasnexus/pgaf112. eCollection 2025 Apr.
5. Human cooperation with artificial agents varies across countries.
   Sci Rep. 2025 Mar 22;15(1):10000. doi: 10.1038/s41598-025-92977-8.
6. GPT-3.5 altruistic advice is sensitive to reciprocal concerns but not to strategic risk.
   Sci Rep. 2024 Sep 27;14(1):22274. doi: 10.1038/s41598-024-73306-x.
7. The impact of generative artificial intelligence on socioeconomic inequalities and policy making.
   PNAS Nexus. 2024 Jun 11;3(6):pgae191. doi: 10.1093/pnasnexus/pgae191. eCollection 2024 Jun.
8. The effects of social presence on cooperative trust with algorithms.
   Sci Rep. 2023 Oct 14;13(1):17463. doi: 10.1038/s41598-023-44354-6.

References

1. Humans perceive warmth and competence in artificial intelligence.
   iScience. 2023 Jul 4;26(8):107256. doi: 10.1016/j.isci.2023.107256. eCollection 2023 Aug 18.
2. Trust within human-machine collectives depends on the perceived consensus about cooperative norms.
   Nat Commun. 2023 May 30;14(1):3108. doi: 10.1038/s41467-023-38592-5.
3. What ChatGPT and generative AI mean for science.
   Nature. 2023 Feb;614(7947):214-216. doi: 10.1038/d41586-023-00340-6.
4. The power to harm: AI assistants pave the way to unethical behavior.
   Curr Opin Psychol. 2022 Oct;47:101382. doi: 10.1016/j.copsyc.2022.101382. Epub 2022 Jun 11.
5. Social impact and governance of AI and neurotechnologies.
   Neural Netw. 2022 Aug;152:542-554. doi: 10.1016/j.neunet.2022.05.012. Epub 2022 May 21.
6. Prosocial behavior toward machines.
   Curr Opin Psychol. 2022 Feb;43:260-265. doi: 10.1016/j.copsyc.2021.08.004. Epub 2021 Aug 12.
7. Understanding, explaining, and utilizing medical artificial intelligence.
   Nat Hum Behav. 2021 Dec;5(12):1636-1642. doi: 10.1038/s41562-021-01146-0. Epub 2021 Jun 28.
8. Bad machines corrupt good morals.
   Nat Hum Behav. 2021 Jun;5(6):679-685. doi: 10.1038/s41562-021-01128-2. Epub 2021 Jun 3.
9. Cooperative AI: machines must learn to find common ground.
   Nature. 2021 May;593(7857):33-36. doi: 10.1038/d41586-021-01170-0.
10. Mastering Atari, Go, chess and shogi by planning with a learned model.
    Nature. 2020 Dec;588(7839):604-609. doi: 10.1038/s41586-020-03051-4. Epub 2020 Dec 23.