Hildt, Elisabeth
Center for the Study of Ethics in the Professions, Illinois Institute of Technology, Chicago, IL 60616, USA.
Bioengineering (Basel). 2025 Apr 2;12(4):375. doi: 10.3390/bioengineering12040375.
This article reflects on explainability in the context of medical artificial intelligence (AI) applications, focusing on AI-based clinical decision support systems (CDSSs). After introducing the concept of explainability in AI and providing a short overview of AI-based CDSSs and the role of explainability in them, four use cases of AI-based CDSSs are presented. The examples were chosen to highlight different types of AI-based CDSSs as well as different types of explanations: a machine learning (ML) tool that lacks explainability; an approach with post hoc explanations; a hybrid model that provides medical knowledge-based explanations; and a causal model that involves complex moral concepts. The role, relevance, and implications of explainability in the context of these use cases are then discussed, focusing on seven explainability-related aspects and themes: (1) the addressees of explainability in medical AI; (2) the relevance of explainability for medical decision making; (3) the type of explanation provided; (4) the often-cited conflict between explainability and accuracy; (5) epistemic authority and automation bias; (6) individual preferences and values; and (7) patient autonomy and the doctor-patient relationship. The case-based discussion reveals that the role and relevance of explainability in AI-based CDSSs vary considerably depending on the tool and use context. While it is plausible to assume that explainability in medical AI has positive implications, empirical data on explainability and its implications are scarce. Use-case-based studies are needed to investigate not only the technical aspects of explainability but also clinicians' and patients' perspectives on its relevance and implications.