Similar Articles

1. A Framework for Interpretability in Machine Learning for Medical Imaging.
IEEE Access. 2024;12:53277-53292. doi: 10.1109/access.2024.3387702. Epub 2024 Apr 11.
2. The future of Cochrane Neonatal.
Early Hum Dev. 2020 Nov;150:105191. doi: 10.1016/j.earlhumdev.2020.105191. Epub 2020 Sep 12.
3. Definitions, methods, and applications in interpretable machine learning.
Proc Natl Acad Sci U S A. 2019 Oct 29;116(44):22071-22080. doi: 10.1073/pnas.1900654116. Epub 2019 Oct 16.
4. Transparency of deep neural networks for medical image analysis: A review of interpretability methods.
Comput Biol Med. 2022 Jan;140:105111. doi: 10.1016/j.compbiomed.2021.105111. Epub 2021 Dec 4.
5. A review of explainable AI in the satellite data, deep machine learning, and human poverty domain.
Patterns (N Y). 2022 Oct 14;3(10):100600. doi: 10.1016/j.patter.2022.100600.
6. Explainability of deep learning models in medical video analysis: a survey.
PeerJ Comput Sci. 2023 Mar 14;9:e1253. doi: 10.7717/peerj-cs.1253. eCollection 2023.
7. Saliency-driven explainable deep learning in medical imaging: bridging visual explainability and statistical quantitative analysis.
BioData Min. 2024 Jun 22;17(1):18. doi: 10.1186/s13040-024-00370-4.
8. Interpretability of Machine Learning Solutions in Public Healthcare: The CRISP-ML Approach.
Front Big Data. 2021 May 26;4:660206. doi: 10.3389/fdata.2021.660206. eCollection 2021.
9. Towards a safe and efficient clinical implementation of machine learning in radiation oncology by exploring model interpretability, explainability and data-model dependency.
Phys Med Biol. 2022 May 27;67(11). doi: 10.1088/1361-6560/ac678a.
10. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective.
BMC Med Inform Decis Mak. 2020 Nov 30;20(1):310. doi: 10.1186/s12911-020-01332-6.

Cited By

1. Progress in the application of machine learning in CT diagnosis of acute appendicitis.
Abdom Radiol (NY). 2025 Mar 17. doi: 10.1007/s00261-025-04864-5.
2. Generating Novel Brain Morphology by Deforming Learned Templates.
ArXiv. 2025 Mar 7:arXiv:2503.03778v2.
3. Interpretable machine learning to evaluate relationships between DAO/DAOA (pLG72) protein data and features in clinical assessments, functional outcome, and cognitive function in schizophrenia patients.
Schizophrenia (Heidelb). 2025 Feb 22;11(1):27. doi: 10.1038/s41537-024-00548-z.
4. Causality and scientific explanation of artificial intelligence systems in biomedicine.
Pflugers Arch. 2025 Apr;477(4):543-554. doi: 10.1007/s00424-024-03033-9. Epub 2024 Oct 29.

References

1. Artificial intelligence for breast cancer detection in screening mammography in Sweden: a prospective, population-based, paired-reader, non-inferiority study.
Lancet Digit Health. 2023 Oct;5(10):e703-e711. doi: 10.1016/S2589-7500(23)00153-X. Epub 2023 Sep 8.
2. The Current and Future State of AI Interpretation of Medical Images.
N Engl J Med. 2023 May 25;388(21):1981-1990. doi: 10.1056/NEJMra2301725.
3. Topological data analysis in medical imaging: current state of the art.
Insights Imaging. 2023 Apr 1;14(1):58. doi: 10.1186/s13244-023-01413-w.
4. Pathologist Validation of a Machine Learning-Derived Feature for Colon Cancer Risk Stratification.
JAMA Netw Open. 2023 Mar 1;6(3):e2254891. doi: 10.1001/jamanetworkopen.2022.54891.
5. Deep Learning Based Methods for Breast Cancer Diagnosis: A Systematic Review and Future Direction.
Diagnostics (Basel). 2023 Jan 3;13(1):161. doi: 10.3390/diagnostics13010161.
6. Anatomically interpretable deep learning of brain age captures domain-specific cognitive impairment.
Proc Natl Acad Sci U S A. 2023 Jan 10;120(2):e2214634120. doi: 10.1073/pnas.2214634120. Epub 2023 Jan 3.
7. Personalized visual encoding model construction with small data.
Commun Biol. 2022 Dec 17;5(1):1382. doi: 10.1038/s42003-022-04347-z.
8. Machine learning based multi-modal prediction of future decline toward Alzheimer's disease: An empirical study.
PLoS One. 2022 Nov 16;17(11):e0277322. doi: 10.1371/journal.pone.0277322. eCollection 2022.
9. CheXGAT: A disease correlation-aware network for thorax disease diagnosis from chest X-ray images.
Artif Intell Med. 2022 Oct;132:102382. doi: 10.1016/j.artmed.2022.102382. Epub 2022 Aug 27.
10. Explainable multiple abnormality classification of chest CT volumes.
Artif Intell Med. 2022 Oct;132:102372. doi: 10.1016/j.artmed.2022.102372. Epub 2022 Aug 12.

A Framework for Interpretability in Machine Learning for Medical Imaging.

Author Information

Wang Alan Q, Karaman Batuhan K, Kim Heejong, Rosenthal Jacob, Saluja Rachit, Young Sean I, Sabuncu Mert R

Affiliations

School of Electrical and Computer Engineering, Cornell University-Cornell Tech, New York City, NY 10044, USA.

Department of Radiology, Weill Cornell Medical School, New York City, NY 10065, USA.

Publication Information

IEEE Access. 2024;12:53277-53292. doi: 10.1109/access.2024.3387702. Epub 2024 Apr 11.

DOI: 10.1109/access.2024.3387702
PMID: 39421804
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11486155/
Abstract

Interpretability for machine learning models in medical imaging (MLMI) is an important direction of research. However, there is a general sense of murkiness in what interpretability means. Why does the need for interpretability in MLMI arise? What goals does one actually seek to address when interpretability is needed? To answer these questions, we identify a need to formalize the goals and elements of interpretability in MLMI. By reasoning about real-world tasks and goals common in both medical image analysis and its intersection with machine learning, we identify five core elements of interpretability: localization, visual recognizability, physical attribution, model transparency, and actionability. From this, we arrive at a framework for interpretability in MLMI, which serves as a step-by-step guide to approaching interpretability in this context. Overall, this paper formalizes interpretability needs in the context of medical imaging, and our applied perspective clarifies concrete MLMI-specific goals and considerations in order to guide method design and improve real-world usage. Our goal is to provide practical and didactic information for model designers and practitioners, inspire developers of models in the medical imaging field to reason more deeply about what interpretability is achieving, and suggest future directions of interpretability research.
