

A Framework for Interpretability in Machine Learning for Medical Imaging.

Author Information

Wang Alan Q, Karaman Batuhan K, Kim Heejong, Rosenthal Jacob, Saluja Rachit, Young Sean I, Sabuncu Mert R

Affiliations

School of Electrical and Computer Engineering, Cornell University-Cornell Tech, New York City, NY 10044, USA.

Department of Radiology, Weill Cornell Medical School, New York City, NY 10065, USA.

Publication Information

IEEE Access. 2024;12:53277-53292. doi: 10.1109/access.2024.3387702. Epub 2024 Apr 11.

Abstract

Interpretability for machine learning models in medical imaging (MLMI) is an important direction of research. However, there is a general sense of murkiness in what interpretability means. Why does the need for interpretability in MLMI arise? What goals does one actually seek to address when interpretability is needed? To answer these questions, we identify a need to formalize the goals and elements of interpretability in MLMI. By reasoning about real-world tasks and goals common in both medical image analysis and its intersection with machine learning, we identify five core elements of interpretability: localization, visual recognizability, physical attribution, model transparency, and actionability. From this, we arrive at a framework for interpretability in MLMI, which serves as a step-by-step guide to approaching interpretability in this context. Overall, this paper formalizes interpretability needs in the context of medical imaging, and our applied perspective clarifies concrete MLMI-specific goals and considerations in order to guide method design and improve real-world usage. Our goal is to provide practical and didactic information for model designers and practitioners, inspire developers of models in the medical imaging field to reason more deeply about what interpretability is achieving, and suggest future directions of interpretability research.
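The five elements named in the abstract are conceptual, but the localization element in particular is often realized in practice as a saliency or attribution map. Below is a minimal, hypothetical sketch of that idea, assuming a generic PyTorch image classifier; the `saliency_map` helper, the model, and the tensor shapes are illustrative assumptions and are not taken from the paper.

```python
# Hypothetical sketch (not from the paper): the "localization" element of
# interpretability, realized as an input-gradient saliency map that shows
# which pixels/voxels most influenced a classifier's predicted class.
import torch


def saliency_map(model: torch.nn.Module, image: torch.Tensor) -> torch.Tensor:
    """Return |d(top-class score)/d(input)| for a single image of shape (C, H, W)."""
    model.eval()
    x = image.unsqueeze(0).clone().requires_grad_(True)  # add batch dimension
    logits = model(x)                                     # shape: (1, num_classes)
    top_class = logits.argmax(dim=1).item()
    logits[0, top_class].backward()                       # gradient of the top-class score
    # Collapse the channel axis; larger magnitude = stronger local attribution.
    return x.grad.detach().abs().amax(dim=1).squeeze(0)   # shape: (H, W)
```

A map like this speaks only to localization (where the evidence lies in the image); the framework's other elements, such as visual recognizability, physical attribution, model transparency, and actionability, are not addressed by such a map alone.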


Similar Articles

1. A Framework for Interpretability in Machine Learning for Medical Imaging. IEEE Access. 2024;12:53277-53292. doi: 10.1109/access.2024.3387702. Epub 2024 Apr 11.
2. The future of Cochrane Neonatal. Early Hum Dev. 2020 Nov;150:105191. doi: 10.1016/j.earlhumdev.2020.105191. Epub 2020 Sep 12.
3. Definitions, methods, and applications in interpretable machine learning. Proc Natl Acad Sci U S A. 2019 Oct 29;116(44):22071-22080. doi: 10.1073/pnas.1900654116. Epub 2019 Oct 16.

