
Towards greater neuroimaging classification transparency via the integration of explainability methods and confidence estimation approaches.

Author Information

Ellis Charles A, Miller Robyn L, Calhoun Vince D

Affiliations

Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, 313 Ferst Dr NW, Atlanta, GA 30332, United States.

Tri-institutional Center for Translational Research in Neuroimaging and Data Science: Georgia State University, Georgia Institute of Technology, Emory University, 55 Park Pl NE, Atlanta, GA, 30303, United States.

Publication Information

Inform Med Unlocked. 2023;37. doi: 10.1016/j.imu.2023.101176. Epub 2023 Jan 18.

DOI: 10.1016/j.imu.2023.101176
PMID: 37035832
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10078989/
Abstract

The field of neuroimaging has increasingly sought to develop artificial intelligence-based models for neurological and neuropsychiatric disorder automated diagnosis and clinical decision support. However, if these models are to be implemented in a clinical setting, transparency will be vital. Two aspects of transparency are (1) confidence estimation and (2) explainability. Confidence estimation approaches indicate confidence in individual predictions. Explainability methods give insight into the importance of features to model predictions. In this study, we integrate confidence estimation and explainability approaches for the first time. We demonstrate their viability for schizophrenia diagnosis using resting state functional magnetic resonance imaging (rs-fMRI) dynamic functional network connectivity (dFNC) data. We compare two confidence estimation approaches: Monte Carlo dropout (MCD) and MC batch normalization (MCBN). We combine them with two gradient-based explainability approaches, saliency and layer-wise relevance propagation (LRP), and examine their effects upon explanations. We find that MCD often adversely affects model gradients, making it ill-suited for integration with gradient-based explainability methods. In contrast, MCBN does not affect model gradients. Additionally, we find many participant-level differences between regular explanations and the distributions of explanations for combined explainability and confidence estimation approaches. This suggests that a similar confidence estimation approach used in a clinical context with explanations only output for the regular model would likely not yield adequate explanations. We hope that our findings will provide a starting point for the integration of the two fields, provide useful guidance for future studies, and accelerate the development of transparent neuroimaging clinical decision support systems.
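As a rough illustration of how the kind of pairing the abstract describes can be wired up, the sketch below combines Monte Carlo dropout confidence estimation with a gradient-based saliency map in PyTorch. This is not the authors' implementation: the toy MLP, the 1378-dimensional dFNC feature vector, the number of stochastic passes, and all hyperparameters are assumptions for demonstration only.

```python
# Minimal sketch (assumed architecture and data shapes, not the paper's code):
# Monte Carlo dropout (MCD) confidence estimation combined with a
# gradient-based saliency explanation for a binary classifier.
import torch
import torch.nn as nn

class SimpleClassifier(nn.Module):
    """Toy MLP with dropout, standing in for a dFNC classifier."""
    def __init__(self, n_features=1378, n_classes=2, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_saliency(model, x, n_samples=20, target_class=1):
    """For each stochastic forward pass (dropout kept active at test time),
    record the softmax confidence for the target class and a saliency map,
    i.e. the gradient of the class logit with respect to the input."""
    model.train()  # keep dropout active at inference time (MC dropout)
    confidences, saliencies = [], []
    for _ in range(n_samples):
        x_in = x.clone().requires_grad_(True)
        logits = model(x_in)
        prob = torch.softmax(logits, dim=1)[0, target_class]
        confidences.append(prob.item())
        grad, = torch.autograd.grad(logits[0, target_class], x_in)
        saliencies.append(grad.abs().squeeze(0))
    # A distribution of confidences and a distribution of explanations,
    # rather than a single point estimate of each.
    return torch.tensor(confidences), torch.stack(saliencies)

# Example usage on a single random "participant" (assumed feature size).
model = SimpleClassifier()
x = torch.randn(1, 1378)  # e.g. a flattened dFNC feature vector
conf, sal = mc_dropout_saliency(model, x)
print(conf.mean().item(), conf.std().item(), sal.shape)
```

MC batch normalization would instead keep batch-norm statistics stochastic while leaving dropout off; per the abstract, MCD can distort the gradients that such saliency computations rely on, whereas MCBN does not, which is the paper's argument for preferring MCBN when combining confidence estimation with gradient-based explanations.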


Figures (PMC full text):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c850/10078989/f7819be19c08/nihms-1879285-f0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c850/10078989/2132b4eb6885/nihms-1879285-f0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c850/10078989/b0fcf172ae2a/nihms-1879285-f0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c850/10078989/941026d233e3/nihms-1879285-f0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c850/10078989/dea3cf302c5f/nihms-1879285-f0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c850/10078989/4787edb422cf/nihms-1879285-f0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c850/10078989/699b36095bb1/nihms-1879285-f0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c850/10078989/c3db678fb4ad/nihms-1879285-f0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c850/10078989/11e3c8d67d7a/nihms-1879285-f0009.jpg

Similar Articles

1. Towards greater neuroimaging classification transparency via the integration of explainability methods and confidence estimation approaches.
   Inform Med Unlocked. 2023;37. doi: 10.1016/j.imu.2023.101176. Epub 2023 Jan 18.
2. Explainability in medicine in an era of AI-based clinical decision support systems.
   Front Genet. 2022 Sep 19;13:903600. doi: 10.3389/fgene.2022.903600. eCollection 2022.
3. Provision and evaluation of explanations within an automated planning-based approach to solving the multimorbidity problem.
   J Biomed Inform. 2024 Aug;156:104681. doi: 10.1016/j.jbi.2024.104681. Epub 2024 Jul 1.
4. Development of an explainable artificial intelligence model for Asian vascular wound images.
   Int Wound J. 2024 Apr;21(4):e14565. doi: 10.1111/iwj.14565. Epub 2023 Dec 25.
5. Explainable fuzzy clustering framework reveals divergent default mode network connectivity dynamics in schizophrenia.
   Front Psychiatry. 2024 Feb 15;15:1165424. doi: 10.3389/fpsyt.2024.1165424. eCollection 2024.
6. Novel methods for elucidating modality importance in multimodal electrophysiology classifiers.
   Front Neuroinform. 2023 Mar 15;17:1123376. doi: 10.3389/fninf.2023.1123376. eCollection 2023.
7. Saliency-driven explainable deep learning in medical imaging: bridging visual explainability and statistical quantitative analysis.
   BioData Min. 2024 Jun 22;17(1):18. doi: 10.1186/s13040-024-00370-4.
8. A multilayer multimodal detection and prediction model based on explainable artificial intelligence for Alzheimer's disease.
   Sci Rep. 2021 Jan 29;11(1):2660. doi: 10.1038/s41598-021-82098-3.
9. Quantifying decision support level of explainable automatic classification of diagnoses in Spanish medical records.
   Comput Biol Med. 2024 Nov;182:109127. doi: 10.1016/j.compbiomed.2024.109127. Epub 2024 Sep 12.
10. Population Preferences for Performance and Explainability of Artificial Intelligence in Health Care: Choice-Based Conjoint Survey.
    J Med Internet Res. 2021 Dec 13;23(12):e26611. doi: 10.2196/26611.

Cited By

1. Pairing explainable deep learning classification with clustering to uncover effects of schizophrenia upon whole brain functional network connectivity dynamics.
   Neuroimage Rep. 2023 Sep 29;3(4):100186. doi: 10.1016/j.ynirp.2023.100186. eCollection 2023 Dec.
2. Magnetic resonance imaging-based machine learning classification of schizophrenia spectrum disorders: a meta-analysis.
   Psychiatry Clin Neurosci. 2024 Dec;78(12):732-743. doi: 10.1111/pcn.13736. Epub 2024 Sep 18.

References

1. Novel methods for elucidating modality importance in multimodal electrophysiology classifiers.
   Front Neuroinform. 2023 Mar 15;17:1123376. doi: 10.3389/fninf.2023.1123376. eCollection 2023.
2. The link between static and dynamic brain functional network connectivity and genetic risk of Alzheimer's disease.
   Neuroimage Clin. 2023;37:103363. doi: 10.1016/j.nicl.2023.103363. Epub 2023 Feb 27.
3. An Unsupervised Feature Learning Approach for Elucidating Hidden Dynamics in rs-fMRI Functional Network Connectivity.
   Annu Int Conf IEEE Eng Med Biol Soc. 2022 Jul;2022:4449-4452. doi: 10.1109/EMBC48229.2022.9871548.
4. Interpreting models interpreting brain dynamics.
   Sci Rep. 2022 Jul 21;12(1):12023. doi: 10.1038/s41598-022-15539-2.
5. A Systematic Approach for Explaining Time and Frequency Features Extracted by Convolutional Neural Networks From Raw Electroencephalography Data.
   Front Neuroinform. 2022 May 31;16:872035. doi: 10.3389/fninf.2022.872035. eCollection 2022.
6. SSPNet: An interpretable 3D-CNN for classification of schizophrenia using phase maps of resting-state complex-valued fMRI data.
   Med Image Anal. 2022 Jul;79:102430. doi: 10.1016/j.media.2022.102430. Epub 2022 Mar 24.
7. Attention module improves both performance and interpretability of four-dimensional functional magnetic resonance imaging decoding neural network.
   Hum Brain Mapp. 2022 Jun 1;43(8):2683-2692. doi: 10.1002/hbm.25813. Epub 2022 Feb 25.
8. Diagnosis of Schizophrenia Based on Deep Learning Using fMRI.
   Comput Math Methods Med. 2021 Nov 9;2021:8437260. doi: 10.1155/2021/8437260. eCollection 2021.
9. The false hope of current approaches to explainable artificial intelligence in health care.
   Lancet Digit Health. 2021 Nov;3(11):e745-e750. doi: 10.1016/S2589-7500(21)00208-9.
10. BrainGNN: Interpretable Brain Graph Neural Network for fMRI Analysis.
    Med Image Anal. 2021 Dec;74:102233. doi: 10.1016/j.media.2021.102233. Epub 2021 Sep 12.