
Artificial intelligence in health care: accountability and safety.

Author Affiliations

Department of Computer Science, University of York, Deramore Lane, Heslington, York YO10 5GH, England.

Bradford Teaching Hospitals NHS Foundation Trust, Bradford, England.

Publication Information

Bull World Health Organ. 2020 Apr 1;98(4):251-256. doi: 10.2471/BLT.19.237487. Epub 2020 Feb 25.

Abstract

The prospect of patient harm caused by the decisions made by an artificial intelligence-based clinical tool is something to which current practices of accountability and safety worldwide have not yet adjusted. We focus on two aspects of clinical artificial intelligence used for decision-making: moral accountability for harm to patients; and safety assurance to protect patients against such harm. Artificial intelligence-based tools are challenging the standard clinical practices of assigning blame and assuring safety. Human clinicians and safety engineers have weaker control over the decisions reached by artificial intelligence systems and less knowledge and understanding of precisely how the artificial intelligence systems reach their decisions. We illustrate this analysis by applying it to an example of an artificial intelligence-based system developed for use in the treatment of sepsis. The paper ends with practical suggestions for ways forward to mitigate these concerns. We argue for a need to include artificial intelligence developers and systems safety engineers in our assessments of moral accountability for patient harm. Meanwhile, none of the actors in the model robustly fulfil the traditional conditions of moral accountability for the decisions of an artificial intelligence system. We should therefore update our conceptions of moral accountability in this context. We also need to move from a static to a dynamic model of assurance, accepting that considerations of safety are not fully resolvable during the design of the artificial intelligence system before the system has been deployed.




