Pierce Robin L, Van Biesen Wim, Van Cauwenberge Daan, Decruyenaere Johan, Sterckx Sigrid
The Law School, University of Exeter, Exeter, United Kingdom.
Head of Department of Nephrology and Centre for Justifiable Digital Healthcare, Ghent University Hospital, Ghent, Belgium.
Front Genet. 2022 Sep 19;13:903600. doi: 10.3389/fgene.2022.903600. eCollection 2022.
The combination of "Big Data" and Artificial Intelligence (AI) is frequently promoted as having the potential to deliver valuable health benefits when applied to medical decision-making. However, the responsible adoption of AI-based clinical decision support systems faces several challenges at both the individual and societal level. One of the features that has given rise to particular concern is the issue of explainability, since, if the way an algorithm arrived at a particular output is not known (or knowable) to a physician, this may lead to multiple challenges, including an inability to evaluate the merits of the output. This "opacity" problem has led to questions about whether physicians are justified in relying on the algorithmic output, with some scholars insisting on the centrality of explainability, while others see no reason to require of AI that which is not required of physicians. We consider that there is merit in both views but find that greater nuance is necessary in order to elucidate the underlying function of explainability in clinical practice and, therefore, its relevance in the context of AI for clinical use. In this paper, we explore explainability by examining what it requires in clinical medicine and draw a distinction between the function of explainability for the current patient versus the future patient. This distinction has implications for what explainability requires in the short and long term. We highlight the role of transparency in explainability, and identify semantic transparency as fundamental to the issue of explainability itself. We argue that, in day-to-day clinical practice, accuracy is sufficient as an "epistemic warrant" for clinical decision-making, and that the most compelling reason for requiring explainability in the sense of scientific or causal explanation is the potential for improving future care by building a more robust model of the world. We identify the goal of clinical decision-making as being to deliver the best possible outcome as often as possible, and find that accuracy is sufficient justification for intervention for today's patient, as long as efforts to uncover scientific explanations continue to improve healthcare for future patients.