A Framework for Interpretability in Machine Learning for Medical Imaging.

Author Information

Wang Alan Q, Karaman Batuhan K, Kim Heejong, Rosenthal Jacob, Saluja Rachit, Young Sean I, Sabuncu Mert R

Affiliations

School of Electrical and Computer Engineering, Cornell University-Cornell Tech, New York City, NY 10044, USA.

Department of Radiology, Weill Cornell Medical School, New York City, NY 10065, USA.

Publication Information

IEEE Access. 2024;12:53277-53292. doi: 10.1109/access.2024.3387702. Epub 2024 Apr 11.

Abstract

Interpretability for machine learning models in medical imaging (MLMI) is an important direction of research. However, there is a general sense of murkiness in what interpretability means. Why does the need for interpretability in MLMI arise? What goals does one actually seek to address when interpretability is needed? To answer these questions, we identify a need to formalize the goals and elements of interpretability in MLMI. By reasoning about real-world tasks and goals common in both medical image analysis and its intersection with machine learning, we identify five core elements of interpretability: localization, visual recognizability, physical attribution, model transparency, and actionability. From this, we arrive at a framework for interpretability in MLMI, which serves as a step-by-step guide to approaching interpretability in this context. Overall, this paper formalizes interpretability needs in the context of medical imaging, and our applied perspective clarifies concrete MLMI-specific goals and considerations in order to guide method design and improve real-world usage. Our goal is to provide practical and didactic information for model designers and practitioners, inspire developers of models in the medical imaging field to reason more deeply about what interpretability is achieving, and suggest future directions of interpretability research.
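
To ground the first of these elements, localization is commonly realized in practice as a saliency heatmap over the input image, for example with Grad-CAM. The sketch below is purely illustrative and is not code from the paper: it assumes PyTorch and torchvision are installed, and it uses an untrained ResNet-18 on a random tensor as a stand-in for a trained medical-imaging classifier and a scan.

    import torch
    import torch.nn.functional as F
    from torchvision.models import resnet18

    # Stand-in classifier (assumption: a real MLMI model would replace this).
    model = resnet18(weights=None).eval()

    activations, gradients = {}, {}

    def save_activation(module, inputs, output):
        activations["feat"] = output.detach()

    def save_gradient(module, grad_input, grad_output):
        gradients["feat"] = grad_output[0].detach()

    # Hook the last convolutional block; its spatial maps drive localization.
    model.layer4.register_forward_hook(save_activation)
    model.layer4.register_full_backward_hook(save_gradient)

    x = torch.randn(1, 3, 224, 224)      # dummy stand-in for a scan
    logits = model(x)
    cls = logits.argmax(dim=1).item()    # class being explained
    model.zero_grad()
    logits[0, cls].backward()            # gradient of the chosen class score

    # Grad-CAM: weight each feature map by its mean gradient, ReLU, upsample.
    weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)  # (1, C, 1, 1)
    cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # scale to [0, 1]
    print(cam.shape)  # torch.Size([1, 1, 224, 224]): heatmap over the input

In a real MLMI workflow the resulting heatmap would be overlaid on the scan so a reader can judge whether the highlighted region corresponds to recognizable pathology, which is where localization connects to the visual-recognizability element the abstract describes.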

Similar Articles

1. The future of Cochrane Neonatal.
Early Hum Dev. 2020 Nov;150:105191. doi: 10.1016/j.earlhumdev.2020.105191. Epub 2020 Sep 12.
2. Definitions, methods, and applications in interpretable machine learning.
Proc Natl Acad Sci U S A. 2019 Oct 29;116(44):22071-22080. doi: 10.1073/pnas.1900654116. Epub 2019 Oct 16.
3. Transparency of deep neural networks for medical image analysis: A review of interpretability methods.
Comput Biol Med. 2022 Jan;140:105111. doi: 10.1016/j.compbiomed.2021.105111. Epub 2021 Dec 4.
4. A review of explainable AI in the satellite data, deep machine learning, and human poverty domain.
Patterns (N Y). 2022 Oct 14;3(10):100600. doi: 10.1016/j.patter.2022.100600.
5. Explainability of deep learning models in medical video analysis: a survey.
PeerJ Comput Sci. 2023 Mar 14;9:e1253. doi: 10.7717/peerj-cs.1253. eCollection 2023.
6. Interpretability of Machine Learning Solutions in Public Healthcare: The CRISP-ML Approach.
Front Big Data. 2021 May 26;4:660206. doi: 10.3389/fdata.2021.660206. eCollection 2021.
7. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective.
BMC Med Inform Decis Mak. 2020 Nov 30;20(1):310. doi: 10.1186/s12911-020-01332-6.

Citing Articles

References Cited in This Article

1. The Current and Future State of AI Interpretation of Medical Images.
N Engl J Med. 2023 May 25;388(21):1981-1990. doi: 10.1056/NEJMra2301725.
2. Topological data analysis in medical imaging: current state of the art.
Insights Imaging. 2023 Apr 1;14(1):58. doi: 10.1186/s13244-023-01413-w.
3. Pathologist Validation of a Machine Learning-Derived Feature for Colon Cancer Risk Stratification.
JAMA Netw Open. 2023 Mar 1;6(3):e2254891. doi: 10.1001/jamanetworkopen.2022.54891.
4. Deep Learning Based Methods for Breast Cancer Diagnosis: A Systematic Review and Future Direction.
Diagnostics (Basel). 2023 Jan 3;13(1):161. doi: 10.3390/diagnostics13010161.
5. Anatomically interpretable deep learning of brain age captures domain-specific cognitive impairment.
Proc Natl Acad Sci U S A. 2023 Jan 10;120(2):e2214634120. doi: 10.1073/pnas.2214634120. Epub 2023 Jan 3.
6. Personalized visual encoding model construction with small data.
Commun Biol. 2022 Dec 17;5(1):1382. doi: 10.1038/s42003-022-04347-z.
7. Machine learning based multi-modal prediction of future decline toward Alzheimer's disease: An empirical study.
PLoS One. 2022 Nov 16;17(11):e0277322. doi: 10.1371/journal.pone.0277322. eCollection 2022.
8. CheXGAT: A disease correlation-aware network for thorax disease diagnosis from chest X-ray images.
Artif Intell Med. 2022 Oct;132:102382. doi: 10.1016/j.artmed.2022.102382. Epub 2022 Aug 27.
9. Explainable multiple abnormality classification of chest CT volumes.
Artif Intell Med. 2022 Oct;132:102372. doi: 10.1016/j.artmed.2022.102372. Epub 2022 Aug 12.
