Wabro Andreas, Herrmann Markus, Winkler Eva C
National Center for Tumor Diseases (NCT) Heidelberg, a partnership between DKFZ and Heidelberg University Hospital; Heidelberg University, Medical Faculty Heidelberg; Department of Medical Oncology, Section Translational Medical Ethics, Heidelberg University Hospital, Heidelberg, Germany.
J Med Ethics. 2025 Jul 23;51(8):516-520. doi: 10.1136/jme-2024-110046.
Explainable artificial intelligence systems designed for clinical decision support (XAI-CDSS) aim to enhance physicians' diagnostic performance, confidence and trust through interpretable methods, thereby providing superior epistemic positioning, a robust foundation for critical reflection and trustworthiness in times of heightened technological dependence. However, recent studies have revealed shortcomings in achieving these goals, calling into question the widespread endorsement of XAI by medical professionals, ethicists and policy-makers alike. Drawing on a surgical use case, this article challenges generalising calls for XAI-CDSS and emphasises the significance of time-sensitive clinical environments, which frequently preclude adequate consideration of system explanations. XAI-CDSS may therefore fail to meet expectations of augmenting clinical decision-making in circumstances where time is of the essence. Employing a principled ethical balancing methodology, the article highlights several fallacies associated with XAI deployment in time-sensitive clinical situations and recommends endorsing XAI only where scientific evidence or stakeholder assessments do not contradict such deployment in specific target settings.