


MiMICRI: Towards Domain-centered Counterfactual Explanations of Cardiovascular Image Classification Models.

Authors

Guo Grace, Deng Lifu, Tandon Animesh, Endert Alex, Kwon Bum Chul

Affiliations

Georgia Institute of Technology, Atlanta, Georgia, USA.

Cleveland Clinic, Cleveland, Ohio, USA.

Publication

FAccT '24 (2024). 2024 Jun;2024:1861-1874. doi: 10.1145/3630106.3659011. Epub 2024 Jun 5.

DOI: 10.1145/3630106.3659011
PMID: 39877054
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11774553/
Abstract

The recent prevalence of publicly accessible, large medical imaging datasets has led to a proliferation of artificial intelligence (AI) models for cardiovascular image classification and analysis. At the same time, the potentially significant impacts of these models have motivated the development of a range of explainable AI (XAI) methods that aim to explain model predictions given certain image inputs. However, many of these methods are not developed or evaluated with domain experts, and explanations are not contextualized in terms of medical expertise or domain knowledge. In this paper, we propose a novel framework and python library, MiMICRI, that provides domain-centered counterfactual explanations of cardiovascular image classification models. MiMICRI helps users interactively select and replace segments of medical images that correspond to morphological structures. From the counterfactuals generated, users can then assess the influence of each segment on model predictions, and validate the model against known medical facts. We evaluate this library with two medical experts. Our evaluation demonstrates that a domain-centered XAI approach can enhance the interpretability of model explanations, and help experts reason about models in terms of relevant domain knowledge. However, concerns were also surfaced about the clinical plausibility of the counterfactuals generated. We conclude with a discussion on the generalizability and trustworthiness of the MiMICRI framework, as well as the implications of our findings on the development of domain-centered XAI methods for model interpretability in healthcare contexts.
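The core mechanism the abstract describes — swapping image regions that correspond to morphological structures and measuring the change in the model's prediction — can be illustrated with a minimal sketch. This is not MiMICRI's actual API; the function names (`replace_segment`, `segment_influence`), the toy classifier, and the synthetic mask are all hypothetical, standing in for a real segmentation model and cardiovascular classifier.

```python
import numpy as np

def replace_segment(image, donor, mask):
    """Return a counterfactual image: pixels where mask is True
    are taken from a donor image, the rest from the original."""
    out = image.copy()
    out[mask] = donor[mask]
    return out

def segment_influence(model, image, donor, mask):
    """Influence of one morphological segment on the prediction:
    change in model output when that segment is swapped out."""
    return model(replace_segment(image, donor, mask)) - model(image)

# Toy "classifier": scores an image by the mean intensity of its
# upper-left quadrant (a stand-in for a trained CNN's output).
def toy_model(img):
    return float(img[:4, :4].mean())

image = np.zeros((8, 8))  # original scan (all zeros)
donor = np.ones((8, 8))   # donor scan (all ones)

# Hypothetical segmentation mask for one structure (e.g. a ventricle)
# that happens to cover the region the toy model relies on.
mask = np.zeros((8, 8), dtype=bool)
mask[:4, :4] = True

influence = segment_influence(toy_model, image, donor, mask)
print(influence)  # 1.0: swapping this one segment fully flips the score
```

A large influence value flags the swapped structure as important to the prediction, which a domain expert can then check against known medical facts — the validation workflow the paper evaluates with its two medical experts.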


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5c89/11774553/7600b87441ad/nihms-2021031-f0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5c89/11774553/3750e0c63bce/nihms-2021031-f0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5c89/11774553/ebab377f9ae4/nihms-2021031-f0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5c89/11774553/81a538de0a71/nihms-2021031-f0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5c89/11774553/ee7726666877/nihms-2021031-f0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5c89/11774553/b6875693d179/nihms-2021031-f0006.jpg

Similar Articles

1. Using generative AI to investigate medical imagery models and datasets. EBioMedicine. 2024 Apr;102:105075. doi: 10.1016/j.ebiom.2024.105075. Epub 2024 Apr 1.
2. Human-centered evaluation of explainable AI applications: a systematic review. Front Artif Intell. 2024 Oct 17;7:1456486. doi: 10.3389/frai.2024.1456486. eCollection 2024.
3. ExAID: A multimodal explanation framework for computer-aided diagnosis of skin lesions. Comput Methods Programs Biomed. 2022 Mar;215:106620. doi: 10.1016/j.cmpb.2022.106620. Epub 2022 Jan 5.
4. CLARUS: An interactive explainable AI platform for manual counterfactuals in graph neural networks. J Biomed Inform. 2024 Feb;150:104600. doi: 10.1016/j.jbi.2024.104600. Epub 2024 Jan 30.
5. From local counterfactuals to global feature importance: efficient, robust, and model-agnostic explanations for brain connectivity networks. Comput Methods Programs Biomed. 2023 Jun;236:107550. doi: 10.1016/j.cmpb.2023.107550. Epub 2023 Apr 16.
6. GANterfactual-Counterfactual Explanations for Medical Non-experts Using Generative Adversarial Learning. Front Artif Intell. 2022 Apr 8;5:825565. doi: 10.3389/frai.2022.825565. eCollection 2022.
7. Clinical domain knowledge-derived template improves post hoc AI explanations in pneumothorax classification. J Biomed Inform. 2024 Aug;156:104673. doi: 10.1016/j.jbi.2024.104673. Epub 2024 Jun 9.
8. Essential properties and explanation effectiveness of explainable artificial intelligence in healthcare: A systematic review. Heliyon. 2023 May 8;9(5):e16110. doi: 10.1016/j.heliyon.2023.e16110. eCollection 2023 May.
9. Explaining the black-box smoothly-A counterfactual approach. Med Image Anal. 2023 Feb;84:102721. doi: 10.1016/j.media.2022.102721. Epub 2022 Dec 13.
