
Explainable AI: A review of applications to neuroimaging data

Authors

Farahani Farzad V, Fiok Krzysztof, Lahijanian Behshad, Karwowski Waldemar, Douglas Pamela K

Affiliations

Department of Biostatistics, Johns Hopkins University, Baltimore, MD, United States.

Department of Industrial Engineering and Management Systems, University of Central Florida, Orlando, FL, United States.

Publication

Front Neurosci. 2022 Dec 1;16:906290. doi: 10.3389/fnins.2022.906290. eCollection 2022.

DOI: 10.3389/fnins.2022.906290
PMID: 36583102
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9793854/
Abstract

Deep neural networks (DNNs) have transformed the field of computer vision and currently constitute some of the best models of representations learned via hierarchical processing in the human brain. In medical imaging, these models have shown human-level performance and even higher in the early diagnosis of a wide range of diseases. However, the goal is often not only to accurately predict group membership or a diagnosis but also to provide explanations that support the model decision in a context that a human can readily interpret. The limited transparency has hindered the adoption of DNN algorithms across many domains. Numerous explainable artificial intelligence (XAI) techniques have been developed to peer inside the "black box" and make sense of DNN models, taking somewhat divergent approaches. Here, we suggest that these methods may be considered in light of the interpretation goal, including functional or mechanistic interpretations, developing archetypal class instances, or assessing the relevance of certain features or mappings on a trained model in a post hoc capacity. We then focus on reviewing recent applications of relevance techniques as applied to neuroimaging data. Moreover, this article suggests a method for comparing the reliability of XAI methods, especially in deep neural networks, along with their advantages and pitfalls.
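The "post hoc relevance" family of XAI methods the abstract refers to attributes a trained model's output back to its input features. As a concrete illustration, here is a minimal sketch of one such technique, gradient × input, on a tiny hand-set ReLU network; the weights and 4-feature input are invented for illustration (real pipelines apply this to trained DNNs, e.g. via libraries such as Captum):

```python
# Gradient x input: relevance of feature i is d(output)/d(x_i) * x_i,
# computed post hoc on a fixed (already "trained") model.
# All weights below are illustrative, not from any real model.

W1 = [[0.6, -1.0, 0.2],      # input (4 features) -> hidden (3 units)
      [1.5,  0.3, -0.7],
      [-0.4, 0.8,  1.1],
      [0.9, -0.2,  0.6]]
W2 = [0.7, -1.2, 0.5]        # hidden (3) -> scalar output

def forward(x):
    # ReLU hidden layer, linear bias-free output.
    h = [max(0.0, sum(x[i] * W1[i][j] for i in range(4))) for j in range(3)]
    out = sum(h[j] * W2[j] for j in range(3))
    return out, h

def gradient_x_input(x):
    """Per-feature relevance via the chain rule back to the input."""
    _, h = forward(x)
    mask = [1.0 if hj > 0 else 0.0 for hj in h]   # ReLU derivative
    grad = [sum(W1[i][j] * mask[j] * W2[j] for j in range(3))
            for i in range(4)]
    return [g * xi for g, xi in zip(grad, x)]

x = [1.0, -2.0, 0.5, 3.0]
relevance = gradient_x_input(x)
# For a bias-free ReLU net the relevances sum exactly to the output,
# a conservation property shared with methods like LRP.
assert abs(sum(relevance) - forward(x)[0]) < 1e-9
```

In neuroimaging applications the same idea is applied voxel-wise, producing a relevance heatmap over the brain volume rather than a 4-vector.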


Figures:

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/60eb/9793854/67891f403625/fnins-16-906290-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/60eb/9793854/345591a820bf/fnins-16-906290-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/60eb/9793854/680d5f5c0486/fnins-16-906290-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/60eb/9793854/a0714672b83d/fnins-16-906290-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/60eb/9793854/a93700611d0b/fnins-16-906290-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/60eb/9793854/301eb10c9c3c/fnins-16-906290-g0006.jpg

Similar articles

1. Explainable AI: A review of applications to neuroimaging data.
Front Neurosci. 2022 Dec 1;16:906290. doi: 10.3389/fnins.2022.906290. eCollection 2022.
2. Survey of Explainable AI Techniques in Healthcare.
Sensors (Basel). 2023 Jan 5;23(2):634. doi: 10.3390/s23020634.
3. Survey of explainable artificial intelligence techniques for biomedical imaging with deep neural networks.
Comput Biol Med. 2023 Apr;156:106668. doi: 10.1016/j.compbiomed.2023.106668. Epub 2023 Feb 18.
4. Applications of Explainable Artificial Intelligence in Diagnosis and Surgery.
Diagnostics (Basel). 2022 Jan 19;12(2):237. doi: 10.3390/diagnostics12020237.
5. Artificial intelligence: Deep learning in oncological radiomics and challenges of interpretability and data harmonization.
Phys Med. 2021 Mar;83:108-121. doi: 10.1016/j.ejmp.2021.03.009. Epub 2021 Mar 22.
6. Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond.
Inf Fusion. 2022 Jan;77:29-52. doi: 10.1016/j.inffus.2021.07.016.
7. Toward explainable AI (XAI) for mental health detection based on language behavior.
Front Psychiatry. 2023 Dec 7;14:1219479. doi: 10.3389/fpsyt.2023.1219479. eCollection 2023.
8. Explainable AI in medical imaging: An overview for clinical practitioners - Beyond saliency-based XAI approaches.
Eur J Radiol. 2023 May;162:110786. doi: 10.1016/j.ejrad.2023.110786. Epub 2023 Mar 20.
9. Recent Advances in Explainable Artificial Intelligence for Magnetic Resonance Imaging.
Diagnostics (Basel). 2023 Apr 27;13(9):1571. doi: 10.3390/diagnostics13091571.
10. A novel approach of brain-computer interfacing (BCI) and Grad-CAM based explainable artificial intelligence: Use case scenario for smart healthcare.
J Neurosci Methods. 2024 Aug;408:110159. doi: 10.1016/j.jneumeth.2024.110159. Epub 2024 May 7.

Cited by

1. AI and mental health: evaluating supervised machine learning models trained on diagnostic classifications.
AI Soc. 2025;40(6):5077-5086. doi: 10.1007/s00146-024-02012-z. Epub 2024 Aug 2.
2. Leveraging AI-Driven Neuroimaging Biomarkers for Early Detection and Social Function Prediction in Autism Spectrum Disorders: A Systematic Review.
Healthcare (Basel). 2025 Jul 22;13(15):1776. doi: 10.3390/healthcare13151776.
3. Explainable Artificial Intelligence in Neuroimaging of Alzheimer's Disease.
Diagnostics (Basel). 2025 Mar 4;15(5):612. doi: 10.3390/diagnostics15050612.
4. Explainable AI in Diagnostic Radiology for Neurological Disorders: A Systematic Review, and What Doctors Think About It.
Diagnostics (Basel). 2025 Jan 13;15(2):168. doi: 10.3390/diagnostics15020168.
5. Explainable brain age prediction: a comparative evaluation of morphometric and deep learning pipelines.
Brain Inform. 2024 Dec 18;11(1):33. doi: 10.1186/s40708-024-00244-9.
6. Towards Transparent Healthcare: Advancing Local Explanation Methods in Explainable Artificial Intelligence.
Bioengineering (Basel). 2024 Apr 12;11(4):369. doi: 10.3390/bioengineering11040369.
7. Neuroimage analysis using artificial intelligence approaches: a systematic review.
Med Biol Eng Comput. 2024 Sep;62(9):2599-2627. doi: 10.1007/s11517-024-03097-w. Epub 2024 Apr 26.
8. Advancing Dermatological Diagnostics: Interpretable AI for Enhanced Skin Lesion Classification.
Diagnostics (Basel). 2024 Apr 2;14(7):753. doi: 10.3390/diagnostics14070753.
9. Large-Scale Neuroimaging of Mental Illness.
Curr Top Behav Neurosci. 2024;68:371-397. doi: 10.1007/7854_2024_462.
10. Spatiotemporal cortical dynamics for visual scene processing as revealed by EEG decoding.
Front Neurosci. 2023 Nov 1;17:1167719. doi: 10.3389/fnins.2023.1167719. eCollection 2023.

References

1. Respond-CAM: Analyzing Deep Models for 3D Imaging Data by Visualizations.
Med Image Comput Comput Assist Interv. 2018 Sep;11070:485-492. doi: 10.1007/978-3-030-00928-1_55. Epub 2018 Sep 26.
2. Explainable artificial intelligence (XAI) in deep learning-based medical image analysis.
Med Image Anal. 2022 Jul;79:102470. doi: 10.1016/j.media.2022.102470. Epub 2022 May 4.
3. Patch individual filter layers in CNNs to harness the spatial homogeneity of neuroimaging data.
Sci Rep. 2021 Dec 27;11(1):24447. doi: 10.1038/s41598-021-03785-9.
4. An explainable AI system for automated COVID-19 assessment and lesion categorization from CT-scans.
Artif Intell Med. 2021 Aug;118:102114. doi: 10.1016/j.artmed.2021.102114. Epub 2021 May 21.
5. Checklist for responsible deep learning modeling of medical images based on COVID-19 detection studies.
Pattern Recognit. 2021 Oct;118:108035. doi: 10.1016/j.patcog.2021.108035. Epub 2021 May 21.
6. An Explainable 3D Residual Self-Attention Deep Neural Network for Joint Atrophy Localization and Alzheimer's Disease Diagnosis Using Structural MRI.
IEEE J Biomed Health Inform. 2022 Nov;26(11):5289-5297. doi: 10.1109/JBHI.2021.3066832. Epub 2022 Nov 10.
7. Explaining decisions of graph convolutional neural networks: patient-specific molecular subnetworks responsible for metastasis prediction in breast cancer.
Genome Med. 2021 Mar 11;13(1):42. doi: 10.1186/s13073-021-00845-7.
8. A multilayer multimodal detection and prediction model based on explainable artificial intelligence for Alzheimer's disease.
Sci Rep. 2021 Jan 29;11(1):2660. doi: 10.1038/s41598-021-82098-3.
9. A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI.
IEEE Trans Neural Netw Learn Syst. 2021 Nov;32(11):4793-4813. doi: 10.1109/TNNLS.2020.3027314. Epub 2021 Oct 27.
10. Brain Biomarker Interpretation in ASD Using Deep Learning and fMRI.
Med Image Comput Comput Assist Interv. 2018 Sep;11072:206-214. doi: 10.1007/978-3-030-00931-1_24. Epub 2018 Sep 13.