


Confronting barriers to human-robot cooperation: balancing efficiency and risk in machine behavior.

Authors

Whiting Tim, Gautam Alvika, Tye Jacob, Simmons Michael, Henstrom Jordan, Oudah Mayada, Crandall Jacob W

Affiliations

Brigham Young University, Provo, UT 84602, USA.

Oregon State University, Corvallis, OR 97331, USA.

Publication

iScience. 2020 Dec 17;24(1):101963. doi: 10.1016/j.isci.2020.101963. eCollection 2021 Jan 22.

DOI: 10.1016/j.isci.2020.101963
PMID: 33458615
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7797565/
Abstract

Many technical and psychological challenges make it difficult to design machines that effectively cooperate with people. To better understand these challenges, we conducted a series of studies investigating human-human, robot-robot, and human-robot cooperation in a strategically rich resource-sharing scenario, which required players to balance efficiency, fairness, and risk. In these studies, both human-human and robot-robot dyads typically learned efficient and risky cooperative solutions when they could communicate. In the absence of communication, robot dyads still often learned the same efficient solution, but human dyads achieved a less efficient (less risky) form of cooperation. This difference in how people and machines treat risk appeared to discourage human-robot cooperation, as human-robot dyads frequently failed to cooperate without communication. These results indicate that machine behavior should better align with human behavior, promoting efficiency while simultaneously considering human tendencies toward risk and fairness.

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/506a/7797565/562104808d6f/fx1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/506a/7797565/01aa1e33fb03/gr1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/506a/7797565/511e033fc46d/gr2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/506a/7797565/5cb73779ca8a/gr3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/506a/7797565/fc1a9e3663e5/gr4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/506a/7797565/58c6d9fe8dd6/gr5.jpg

Similar articles

1. Confronting barriers to human-robot cooperation: balancing efficiency and risk in machine behavior.
iScience. 2020 Dec 17;24(1):101963. doi: 10.1016/j.isci.2020.101963. eCollection 2021 Jan 22.
2. Learning to Cooperate via an Attention-Based Communication Neural Network in Decentralized Multi-Robot Exploration.
Entropy (Basel). 2019 Mar 19;21(3):294. doi: 10.3390/e21030294.
3. Investigating cooperation with robotic peers.
PLoS One. 2019 Nov 20;14(11):e0225028. doi: 10.1371/journal.pone.0225028. eCollection 2019.
4. Cooperation in Human-Agent Systems to Support Resilience: A Microworld Experiment.
Hum Factors. 2016 Sep;58(6):846-63. doi: 10.1177/0018720816649094. Epub 2016 May 13.
5. Why Robots Should Be Social: Enhancing Machine Learning through Social Human-Robot Interaction.
PLoS One. 2015 Sep 30;10(9):e0138061. doi: 10.1371/journal.pone.0138061. eCollection 2015.
6. Assisting Operators in Heavy Industrial Tasks: On the Design of an Optimized Cooperative Impedance Fuzzy-Controller With Embedded Safety Rules.
Front Robot AI. 2019 Aug 21;6:75. doi: 10.3389/frobt.2019.00075. eCollection 2019.
7. Trends in Haptic Communication of Human-Human Dyads: Toward Natural Human-Robot Co-manipulation.
Front Neurorobot. 2021 Feb 17;15:626074. doi: 10.3389/fnbot.2021.626074. eCollection 2021.
8. Reinforcement of cooperation between profoundly retarded adults.
Am J Ment Defic. 1975 Jul;80(1):63-71.
9. Localization and control of a rehabilitation mobile robot by close human-machine cooperation.
IEEE Trans Neural Syst Rehabil Eng. 2001 Jun;9(2):181-90. doi: 10.1109/7333.928578.
10. Social Preferences Toward Humans and Machines: A Systematic Experiment on the Role of Machine Payoffs.
Perspect Psychol Sci. 2025 Jan;20(1):165-181. doi: 10.1177/17456916231194949. Epub 2023 Sep 26.

Cited by

1. Evidence of spillovers from (non)cooperative human-bot to human-human interactions.
iScience. 2025 Jun 25;28(8):113006. doi: 10.1016/j.isci.2025.113006. eCollection 2025 Aug 15.
2. Human injury-based safety decision of automated vehicles.
iScience. 2022 Jun 30;25(8):104703. doi: 10.1016/j.isci.2022.104703. eCollection 2022 Aug 19.
3. Algorithm exploitation: Humans are keen to exploit benevolent AI.

References

1. Machine behaviour.
Nature. 2019 Apr;568(7753):477-486. doi: 10.1038/s41586-019-1138-y. Epub 2019 Apr 24.
2. Cooperating with machines.
Nat Commun. 2018 Jan 16;9(1):233. doi: 10.1038/s41467-017-02597-8.
3. DeepStack: Expert-level artificial intelligence in heads-up no-limit poker.
Science. 2017 May 5;356(6337):508-513. doi: 10.1126/science.aam6960. Epub 2017 Mar 2.
4. Mastering the game of Go with deep neural networks and tree search.
Nature. 2016 Jan 28;529(7587):484-9. doi: 10.1038/nature16961.
5. Computer science. Heads-up limit hold'em poker is solved.
Science. 2015 Jan 9;347(6218):145-9. doi: 10.1126/science.1259433.
6. How do we think machines think? An fMRI study of alleged competition with an artificial intelligence.
Front Hum Neurosci. 2012 May 8;6:103. doi: 10.3389/fnhum.2012.00103. eCollection 2012.
7. Checkers is solved.
Science. 2007 Sep 14;317(5844):1518-22. doi: 10.1126/science.1144079. Epub 2007 Jul 19.
8. Multiagent reinforcement learning in the Iterated Prisoner's Dilemma.
Biosystems. 1996;37(1-2):147-66. doi: 10.1016/0303-2647(95)01551-5.
9. A prisoner's dilemma experiment on cooperation with people and human-like computers.
J Pers Soc Psychol. 1996 Jan;70(1):47-65. doi: 10.1037//0022-3514.70.1.47.