Putting explainable AI in context: institutional explanations for medical AI.

Author information

Mark Theunissen, Jacob Browning

Affiliations

Department of Values, Technology and Innovation, School of Technology, Policy and Management, Delft University of Technology, Delft, The Netherlands.

New York University, New York, USA.

Publication information

Ethics Inf Technol. 2022;24(2):23. doi: 10.1007/s10676-022-09649-8. Epub 2022 May 6.

Abstract

There is a current debate about whether, and in what sense, machine learning systems used in the medical context need to be explainable. Those arguing in favor contend these systems require post hoc explanations for each individual decision to increase trust and ensure accurate diagnoses. Those arguing against suggest the high accuracy and reliability of the systems are sufficient for providing epistemically justified beliefs without the need for explaining each individual decision. But, as we show, both solutions have limitations, and it is unclear whether either addresses the epistemic worries of the medical professionals using these systems. We argue these systems do require an explanation, but an institutional explanation. These types of explanations provide the reasons why the medical professional should rely on the system in practice; that is, they focus on trying to address the epistemic concerns of those using the system in specific contexts and on specific occasions. But ensuring that these institutional explanations are fit for purpose means ensuring the institutions designing and deploying these systems are transparent about the assumptions baked into the system. This requires coordination with experts and end-users concerning how it will function in the field, the metrics used to evaluate its accuracy, and the procedures for auditing the system to prevent biases and failures from going unaddressed. We contend this broader explanation is necessary for either post hoc explanations or accuracy scores to be epistemically meaningful to the medical professional, making it possible for them to rely on these systems as effective and useful tools in their practices.
