Health Ethics and Policy Lab, Department of Health Sciences and Technology, ETH Zurich, Hottingerstrasse 10, 8092, Zurich, Switzerland.
Charité Lab for Artificial Intelligence in Medicine-CLAIM, Charité - Universitätsmedizin Berlin, Berlin, Germany.
BMC Med Inform Decis Mak. 2020 Nov 30;20(1):310. doi: 10.1186/s12911-020-01332-6.
BACKGROUND: Explainability is one of the most heavily debated topics when it comes to the application of artificial intelligence (AI) in healthcare. Even though AI-driven systems have been shown to outperform humans in certain analytical tasks, the lack of explainability continues to spark criticism. Yet explainability is not a purely technological issue; rather, it invokes a host of medical, legal, ethical, and societal questions that require thorough exploration. This paper provides a comprehensive assessment of the role of explainability in medical AI and makes an ethical evaluation of what explainability means for the adoption of AI-driven tools into clinical practice.

METHODS: Taking AI-based clinical decision support systems as a case in point, we adopted a multidisciplinary approach to analyze the relevance of explainability for medical AI from the technological, legal, medical, and patient perspectives. Drawing on the findings of this conceptual analysis, we then conducted an ethical assessment using the "Principles of Biomedical Ethics" by Beauchamp and Childress (autonomy, beneficence, nonmaleficence, and justice) as an analytical framework to determine the need for explainability in medical AI.

RESULTS: Each of the domains highlights a different set of core considerations and values that are relevant for understanding the role of explainability in clinical practice. From the technological point of view, explainability has to be considered both in terms of how it can be achieved and what is beneficial from a development perspective. From the legal perspective, we identified informed consent, certification and approval as medical devices, and liability as core touchpoints for explainability. Both the medical and patient perspectives emphasize the importance of considering the interplay between human actors and medical AI. We conclude that omitting explainability in clinical decision support systems poses a threat to core ethical values in medicine and may have detrimental consequences for individual and public health.

CONCLUSIONS: To ensure that medical AI lives up to its promises, there is a need to sensitize developers, healthcare professionals, and legislators to the challenges and limitations of opaque algorithms in medical AI and to foster multidisciplinary collaboration moving forward.