
AI and the need for justification (to the patient).

Author information

Muralidharan Anantharaman, Savulescu Julian, Schaefer G Owen

Affiliations

Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore.

Murdoch Children's Research Institute, Melbourne, VIC, Australia.

Publication information

Ethics Inf Technol. 2024;26(1):16. doi: 10.1007/s10676-024-09754-w. Epub 2024 Mar 4.

Abstract

This paper argues that one problem that besets black-box AI is that it lacks algorithmic justifiability. We argue that the norm of shared decision making in medical care presupposes that treatment decisions ought to be justifiable to the patient. Medical decisions are justifiable to the patient only if they are compatible with the patient's values and preferences and the patient is able to see that this is so. Patient-directed justifiability is threatened by black-box AIs because the lack of rationale provided for the decision makes it difficult for patients to ascertain whether there is adequate fit between the decision and the patient's values. This paper argues that achieving algorithmic transparency does not help patients bridge the gap between their medical decisions and values. We introduce a hypothetical model we call Justifiable AI to illustrate this argument. Justifiable AI aims at modelling normative and evaluative considerations in an explicit way so as to provide a stepping stone for patient and physician to jointly decide on a course of treatment. If our argument succeeds, we should prefer these justifiable models over alternatives if the former are available and aim to develop said models if not.

Similar articles

1. AI and the need for justification (to the patient).
Ethics Inf Technol. 2024;26(1):16. doi: 10.1007/s10676-024-09754-w. Epub 2024 Mar 4.

5. Can Medical Interventions Serve as 'Criminal Rehabilitation'?
Neuroethics. 2019;12(1):85-96. doi: 10.1007/s12152-016-9264-9. Epub 2016 Jun 27.

9. Call for the responsible artificial intelligence in the healthcare.
BMJ Health Care Inform. 2023 Dec 21;30(1):e100920. doi: 10.1136/bmjhci-2023-100920.

References cited in this article

3. Algorithms for Ethical Decision-Making in the Clinic: A Proof of Concept.
Am J Bioeth. 2022 Jul;22(7):4-20. doi: 10.1080/15265161.2022.2040647. Epub 2022 Mar 16.

9. Computer knows best? The need for value-flexibility in medical AI.
J Med Ethics. 2019 Mar;45(3):156-160. doi: 10.1136/medethics-2018-105118. Epub 2018 Nov 22.

10. Autonomy: What's Shared Decision Making Have to Do With It?
Am J Bioeth. 2018 Feb;18(2):W11-W12. doi: 10.1080/15265161.2017.1409844.
