Wang Alan Q, Karaman Batuhan K, Kim Heejong, Rosenthal Jacob, Saluja Rachit, Young Sean I, Sabuncu Mert R
School of Electrical and Computer Engineering, Cornell University-Cornell Tech, New York, NY 10044, USA.
Department of Radiology, Weill Cornell Medical School, New York, NY 10065, USA.
IEEE Access. 2024;12:53277-53292. doi: 10.1109/access.2024.3387702. Epub 2024 Apr 11.
Interpretability for machine learning models in medical imaging (MLMI) is an important direction of research. However, there is a general sense of murkiness around what interpretability means. Why does the need for interpretability in MLMI arise? What goals does one actually seek to address when interpretability is needed? To answer these questions, we identify a need to formalize the goals and elements of interpretability in MLMI. By reasoning about real-world tasks and goals common in both medical image analysis and its intersection with machine learning, we identify five core elements of interpretability: localization, visual recognizability, physical attribution, model transparency, and actionability. From this, we arrive at a framework for interpretability in MLMI, which serves as a step-by-step guide to approaching interpretability in this context. Overall, this paper formalizes interpretability needs in the context of medical imaging, and our applied perspective clarifies concrete MLMI-specific goals and considerations to guide method design and improve real-world usage. Our goal is to provide practical and didactic information for model designers and practitioners, to inspire developers of models in the medical imaging field to reason more deeply about what interpretability is achieving, and to suggest future directions of interpretability research.