Moral Decision Making in Human-Agent Teams: Human Control and the Role of Explanations.

Authors

van der Waa Jasper, Verdult Sabine, van den Bosch Karel, van Diggelen Jurriaan, Haije Tjalling, van der Stigchel Birgit, Cocu Ioana

Affiliations

Perceptual and Cognitive Systems, TNO, Soesterberg, Netherlands.

Interactive Intelligence, Technical University Delft, Delft, Netherlands.

Publication

Front Robot AI. 2021 May 27;8:640647. doi: 10.3389/frobt.2021.640647. eCollection 2021.

DOI: 10.3389/frobt.2021.640647
PMID: 34124173
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8190710/
Abstract

With the progress of Artificial Intelligence, intelligent agents are increasingly being deployed in tasks for which ethical guidelines and moral values apply. As artificial agents do not have a legal position, humans should be held accountable if actions do not comply, implying humans need to exercise control. This is often labeled as Meaningful Human Control (MHC). In this paper, achieving MHC is addressed as a design problem, defining the collaboration between humans and agents. We propose three possible team designs (Team Design Patterns), varying in the level of autonomy on the agent's part. The team designs include explanations given by the agent to clarify its reasoning and decision-making. The designs were implemented in a simulation of a medical triage task, to be executed by a domain expert and an artificial agent. The triage task simulates making decisions under time pressure, with too few resources available to comply with all medical guidelines all the time, hence involving moral choices. Domain experts (i.e., health care professionals) participated in the present study. One goal was to assess the ecological relevance of the simulation; a second, to explore the control that the human has over the agent to warrant morally compliant behavior in each proposed team design; a third, to evaluate the role of agent explanations in the human's understanding of the agent's reasoning. Results showed that the experts overall found the task a believable simulation of what might occur in reality. Domain experts experienced control over the team's moral compliance when consequences were quickly noticeable. When consequences instead emerged much later, the experts experienced less control and felt less responsible. Possibly due to the experienced time pressure implemented in the task, or to overtrust in the agent, the experts did not use explanations much during the task; when asked afterwards, however, they considered these to be useful. It is concluded that a team design should emphasize and support the human in developing a sense of responsibility for the agent's behavior and for the team's decisions. The design should include explanations that fit the assigned team roles as well as the human's cognitive state.
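
The abstract describes three Team Design Patterns that differ in the agent's level of autonomy, with the agent attaching an explanation to each triage decision. The paper does not publish code; the following is a minimal hypothetical sketch of how those patterns might route a decision, where all names (TeamDesign, TriageDecision, run_decision, the review callbacks) are illustrative assumptions, not the authors' implementation:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable, Optional


class TeamDesign(Enum):
    # Three hypothetical patterns, ordered by increasing agent autonomy.
    HUMAN_DECIDES = auto()   # agent only advises; the human makes every call
    HUMAN_APPROVES = auto()  # agent proposes; the human approves or overrides
    AGENT_DECIDES = auto()   # agent acts on its own; the human monitors


@dataclass
class TriageDecision:
    patient_id: str
    action: str        # e.g. "treat_now", "stabilize", "defer"
    explanation: str   # the agent's stated rationale, shown to the human


def run_decision(
    design: TeamDesign,
    agent_proposal: TriageDecision,
    human_decide: Callable[[TriageDecision], TriageDecision],
    human_review: Callable[[TriageDecision], Optional[TriageDecision]],
) -> TriageDecision:
    """Route one triage decision according to the team design pattern.

    human_decide composes the final decision (agent input is advice only);
    human_review returns an overriding decision, or None to accept the
    agent's proposal as-is.
    """
    if design is TeamDesign.HUMAN_DECIDES:
        # The human decides; the agent's proposal (and its explanation)
        # serve only as advice.
        return human_decide(agent_proposal)
    if design is TeamDesign.HUMAN_APPROVES:
        # The agent's proposal takes effect unless the human overrides it.
        override = human_review(agent_proposal)
        return override if override is not None else agent_proposal
    # AGENT_DECIDES: the agent's decision stands; the human only monitors
    # and would have to intervene after the fact.
    return agent_proposal


# Example: a human teammate who accepts the agent's proposal unchanged.
proposal = TriageDecision("patient-17", "defer",
                          "Low survival probability given available resources.")
final = run_decision(TeamDesign.HUMAN_APPROVES, proposal,
                     human_decide=lambda p: p,
                     human_review=lambda p: None)
print(final.action)  # -> "defer"
```

The sketch captures only where final decision authority sits in each pattern; the study's actual designs are richer, and per the abstract they also vary how the agent's explanations support the human's understanding.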

Figures:
Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6088/8190710/05459611f2e1/frobt-08-640647-g001.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6088/8190710/22ecd41a5cca/frobt-08-640647-g002.jpg
Figure 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6088/8190710/fe4587de8d4d/frobt-08-640647-g003.jpg

Similar Articles

1. Moral Decision Making in Human-Agent Teams: Human Control and the Role of Explanations.
Front Robot AI. 2021 May 27;8:640647. doi: 10.3389/frobt.2021.640647. eCollection 2021.
2. A conceptual and computational model of moral decision making in human and artificial agents.
Top Cogn Sci. 2010 Jul;2(3):454-85. doi: 10.1111/j.1756-8765.2010.01095.x. Epub 2010 May 13.
3. [The origin of informed consent].
Acta Otorhinolaryngol Ital. 2005 Oct;25(5):312-27.
4. Using fNIRS to Identify Transparency- and Reliability-Sensitive Markers of Trust Across Multiple Timescales in Collaborative Human-Human-Agent Triads.
Front Neuroergon. 2022 Apr 7;3:838625. doi: 10.3389/fnrgo.2022.838625. eCollection 2022.
5. Heterogeneous human-robot task allocation based on artificial trust.
Sci Rep. 2022 Sep 12;12(1):15304. doi: 10.1038/s41598-022-19140-5.
6. Teammates Instead of Tools: The Impacts of Level of Autonomy on Mission Performance and Human-Agent Teaming Dynamics in Multi-Agent Distributed Teams.
Front Robot AI. 2022 May 20;9:782134. doi: 10.3389/frobt.2022.782134. eCollection 2022.
7. When Do Humans Heed AI Agents' Advice? When Should They?
Hum Factors. 2024 Jul;66(7):1914-1927. doi: 10.1177/00187208231190459. Epub 2023 Aug 8.
8. Intelligent decision support in medical triage: are people robust to biased advice?
J Public Health (Oxf). 2023 Aug 28;45(3):689-696. doi: 10.1093/pubmed/fdad005.
9. Explaining Aha! moments in artificial agents through IKE-XAI: Implicit Knowledge Extraction for eXplainable AI.
Neural Netw. 2022 Nov;155:95-118. doi: 10.1016/j.neunet.2022.08.002. Epub 2022 Aug 6.
10. The influence of a bystander agent's beliefs on children's and adults' decision-making process.
J Exp Child Psychol. 2017 Jan;153:126-139. doi: 10.1016/j.jecp.2016.09.006. Epub 2016 Oct 12.

Cited By

1. Meaningful human control and variable autonomy in human-robot teams for firefighting.
Front Robot AI. 2024 Feb 1;11:1323980. doi: 10.3389/frobt.2024.1323980. eCollection 2024.
2. Real-World and Regulatory Perspectives of Artificial Intelligence in Cardiovascular Imaging.
Front Cardiovasc Med. 2022 Jul 22;9:890809. doi: 10.3389/fcvm.2022.890809. eCollection 2022.

References

1. Meaningful Human Control over Autonomous Systems: A Philosophical Account.
Front Robot AI. 2018 Feb 28;5:15. doi: 10.3389/frobt.2018.00015. eCollection 2018.
2. Decision Explanation and Feature Importance for Invertible Networks.
IEEE Int Conf Comput Vis Workshops. 2019 Oct;2019:4235-4239. doi: 10.1109/iccvw.2019.00521. Epub 2020 Mar 5.
3. Measuring the Quality of Explanations: The System Causability Scale (SCS): Comparing Human and Machine Explanations.
Kunstliche Intell (Oldenbourg). 2020;34(2):193-198. doi: 10.1007/s13218-020-00636-z. Epub 2020 Jan 21.
4. Virtually Perfect? Telemedicine for Covid-19.
N Engl J Med. 2020 Apr 30;382(18):1679-1681. doi: 10.1056/NEJMp2003539. Epub 2020 Mar 11.
5. Critiquing the Reasons for Making Artificial Moral Agents.
Sci Eng Ethics. 2019 Jun;25(3):719-735. doi: 10.1007/s11948-018-0030-8. Epub 2018 Feb 19.
6. Robotics: Ethics of artificial intelligence.
Nature. 2015 May 28;521(7553):415-8. doi: 10.1038/521415a.
7. Classification with correlated features: unreliability of feature ranking and solutions.
Bioinformatics. 2011 Jul 15;27(14):1986-94. doi: 10.1093/bioinformatics/btr300. Epub 2011 May 16.
8. Conditional variable importance for random forests.
BMC Bioinformatics. 2008 Jul 11;9:307. doi: 10.1186/1471-2105-9-307.
9. Bias in random forest variable importance measures: illustrations, sources and a solution.
BMC Bioinformatics. 2007 Jan 25;8:25. doi: 10.1186/1471-2105-8-25.
10. A Bayesian model for triage decision support.
Int J Med Inform. 2006 May;75(5):403-11. doi: 10.1016/j.ijmedinf.2005.07.028. Epub 2005 Sep 2.