Räz Tim, Pahud De Mortanges Aurélie, Reyes Mauricio
Institute of Philosophy, University of Bern, Bern, Switzerland.
ARTORG Center for Biomedical Research, University of Bern, Bern, Switzerland.
Front Radiol. 2025 Aug 5;5:1627169. doi: 10.3389/fradi.2025.1627169. eCollection 2025.
Future AI systems may need to provide medical professionals with explanations of AI predictions and decisions. While current XAI methods match these requirements in principle, they are too inflexible and insufficiently geared toward clinicians' needs to fulfill this role. This paper offers a conceptual roadmap for how XAI may be integrated into future medical practice. We identify three desiderata of increasing difficulty: First, explanations need to be provided in a context- and user-dependent manner. Second, explanations need to be created through a genuine dialogue between AI and human users. Third, AI systems need genuine social capabilities. We use an imaginary stroke treatment scenario as a foundation for our roadmap to explore how the three challenges emerge at different stages of clinical practice. We define key concepts such as genuine dialogue and social capability, discuss why these capabilities are desirable, and identify major roadblocks. Our goal is to help practitioners and researchers develop future XAI that is capable of operating as a participant in complex medical environments. We employ an interdisciplinary methodology that integrates medical XAI, medical practice, and philosophy.