Suppr 超能文献


Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond.

Authors

Yang Guang, Ye Qinghao, Xia Jun

Affiliations

National Heart and Lung Institute, Imperial College London, London, UK.

Royal Brompton Hospital, London, UK.

Publication

Inf Fusion. 2022 Jan;77:29-52. doi: 10.1016/j.inffus.2021.07.016.

DOI: 10.1016/j.inffus.2021.07.016
PMID: 34980946
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8459787/
Abstract

Explainable Artificial Intelligence (XAI) is an emerging research topic of machine learning aimed at how AI systems' choices are made. This research field inspects the measures and models involved in decision-making and seeks solutions to explain them explicitly. Many of the machine learning algorithms cannot manifest how and why a decision has been cast. This is particularly true of the most popular deep neural network approaches currently in use. Consequently, our confidence in AI systems can be hindered by the lack of explainability in these models. The XAI becomes more and more crucial for deep learning powered applications, especially for medical and healthcare studies, although in general these deep neural networks can return an arresting dividend in performance. The insufficient explainability and transparency in most existing AI systems can be one of the major reasons that successful implementation and integration of AI tools into routine clinical practice are uncommon. In this study, we first surveyed the current progress of XAI and in particular its advances in healthcare applications. We then introduced our solutions for XAI leveraging multi-modal and multi-centre data fusion, and subsequently validated in two showcases following real clinical scenarios. Comprehensive quantitative and qualitative analyses can prove the efficacy of our proposed XAI solutions, from which we can envisage successful applications in a broader range of clinical questions.
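To make the post-hoc explanation idea concrete, below is a minimal occlusion-sensitivity sketch, a standard model-agnostic XAI technique of the kind surveyed in this review — not the authors' multi-modal fusion method. The `toy_model`, patch size, and toy image are illustrative assumptions only.

```python
import numpy as np

def occlusion_map(model, image, patch=4, baseline=0.0):
    """Slide a baseline-valued patch over the image and record how much the
    model's score drops at each location; large drops mark regions the model
    relies on for its decision."""
    h, w = image.shape
    ref = model(image)  # score on the unperturbed input
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heat[i // patch, j // patch] = ref - model(occluded)
    return heat

# Hypothetical stand-in for a classifier: scores an image by the mean
# intensity of its top-left quadrant, i.e. it "attends" to one region.
def toy_model(img):
    return float(img[:8, :8].mean())

img = np.zeros((16, 16))
img[:8, :8] = 1.0  # the signal lives in the top-left quadrant
heat = occlusion_map(toy_model, img)
# The largest score drops coincide with the top-left quadrant, exposing
# which region the model's decision depends on.
```

Because the technique treats the model as a black box, the same loop applies unchanged to any scoring function, which is what makes such perturbation-based explanations attractive in clinical settings where model internals are inaccessible.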


Figures (gr1–gr19):

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/27c5437d16a4/gr1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/d57a1e6ee719/gr2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/77f81dba04cf/gr3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/bd011dd9d541/gr4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/deeece30dad6/gr5.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/58a547fc3339/gr6.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/5ef7742e9255/gr7.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/5a475072a279/gr8.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/ff06d6a1e64c/gr9.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/98135b28ec0a/gr10.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/f2fab80bf7aa/gr11.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/d767758fc406/gr12.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/043c53d9aba4/gr13.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/8b55d95f19b2/gr14.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/64772599f66a/gr15.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/22bd0902e51d/gr16.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/1982df6d4bb1/gr17.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/be46247d733f/gr18.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/622e5054ba51/gr19.jpg

Similar Articles

1. Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond.
   Inf Fusion. 2022 Jan;77:29-52. doi: 10.1016/j.inffus.2021.07.016.
2. BenchXAI: Comprehensive benchmarking of post-hoc explainable AI methods on multi-modal biomedical data.
   Comput Biol Med. 2025 Jun;191:110124. doi: 10.1016/j.compbiomed.2025.110124. Epub 2025 Apr 15.
3. Demystifying the black box: A survey on explainable artificial intelligence (XAI) in bioinformatics.
   Comput Struct Biotechnol J. 2025 Jan 10;27:346-359. doi: 10.1016/j.csbj.2024.12.027. eCollection 2025.
4. Explainable AI in medical imaging: An overview for clinical practitioners - Beyond saliency-based XAI approaches.
   Eur J Radiol. 2023 May;162:110786. doi: 10.1016/j.ejrad.2023.110786. Epub 2023 Mar 20.
5. Explainable AI for Bioinformatics: Methods, Tools and Applications.
   Brief Bioinform. 2023 Sep 20;24(5). doi: 10.1093/bib/bbad236.
6. Applications of Explainable Artificial Intelligence in Diagnosis and Surgery.
   Diagnostics (Basel). 2022 Jan 19;12(2):237. doi: 10.3390/diagnostics12020237.
7. Exploring the Applications of Explainability in Wearable Data Analytics: Systematic Literature Review.
   J Med Internet Res. 2024 Dec 24;26:e53863. doi: 10.2196/53863.
8. Model-agnostic explainable artificial intelligence tools for severity prediction and symptom analysis on Indian COVID-19 data.
   Front Artif Intell. 2023 Dec 4;6:1272506. doi: 10.3389/frai.2023.1272506. eCollection 2023.
9. Applications of and issues with machine learning in medicine: Bridging the gap with explainable AI.
   Biosci Trends. 2025 Jan 14;18(6):497-504. doi: 10.5582/bst.2024.01342. Epub 2024 Dec 8.
10. Evaluating Explainable Artificial Intelligence (XAI) techniques in chest radiology imaging through a human-centered Lens.
   PLoS One. 2024 Oct 9;19(10):e0308758. doi: 10.1371/journal.pone.0308758. eCollection 2024.

Cited By

1. Machine learning for myocarditis diagnosis using cardiovascular magnetic resonance: a systematic review, diagnostic test accuracy meta-analysis, and comparison with human physicians.
   Int J Cardiovasc Imaging. 2025 Sep 9. doi: 10.1007/s10554-025-03497-5.
2. Personalized health monitoring using explainable AI: bridging trust in predictive healthcare.
   Sci Rep. 2025 Aug 29;15(1):31892. doi: 10.1038/s41598-025-15867-z.
3. Development and Validation of a Machine Learning-Based Screening Algorithm to Predict High-Risk Hepatitis C Infection.
   Open Forum Infect Dis. 2025 Aug 15;12(8):ofaf496. doi: 10.1093/ofid/ofaf496. eCollection 2025 Aug.
4. Artificial Intelligence Applications in Emergency Toxicology: Advancements and Challenges.
   J Med Internet Res. 2025 Aug 22;27:e73121. doi: 10.2196/73121.
5. A Comprehensive Comparison and Evaluation of AI-Powered Healthcare Mobile Applications' Usability.
   Healthcare (Basel). 2025 Jul 26;13(15):1829. doi: 10.3390/healthcare13151829.
6. Explainable semi-supervised model for predicting invasion depth of esophageal squamous cell carcinoma based on the IPCL and AVA patterns.
   Sci Rep. 2025 Jul 2;15(1):22519. doi: 10.1038/s41598-025-06172-w.
7. Medical digital twins: enabling precision medicine and medical artificial intelligence.
   Lancet Digit Health. 2025 Jun 14:100864. doi: 10.1016/j.landig.2025.02.004.
8. Development and validation of an interpretable nomogram for predicting the risk of the prolonged postoperative length of stay for tuberculous spondylitis: a novel approach for risk stratification.
   BMC Musculoskelet Disord. 2025 Jun 2;26(1):539. doi: 10.1186/s12891-025-08807-5.
9. Beyond Biomarkers: Machine Learning-Driven Multiomics for Personalized Medicine in Gastric Cancer.
   J Pers Med. 2025 Apr 24;15(5):166. doi: 10.3390/jpm15050166.
10. Development of Explainable Machine Learning Models to Identify Patients at Risk for 1-Year Mortality and New Distant Metastases Postendoprosthetic Reconstruction for Lower Extremity Bone Tumors: A Secondary Analysis of the PARITY Trial.
   JB JS Open Access. 2025 May 22;10(2). doi: 10.2106/JBJS.OA.24.00213. eCollection 2025 Apr-Jun.

References

1. Respond-CAM: Analyzing Deep Models for 3D Imaging Data by Visualizations.
   Med Image Comput Comput Assist Interv. 2018 Sep;11070:485-492. doi: 10.1007/978-3-030-00928-1_55. Epub 2018 Sep 26.
2. Deep ROC Analysis and AUC as Balanced Average Accuracy, for Improved Classifier Selection, Audit and Explanation.
   IEEE Trans Pattern Anal Mach Intell. 2023 Jan;45(1):329-341. doi: 10.1109/TPAMI.2022.3145392. Epub 2022 Dec 5.
3. The three ghosts of medical AI: Can the black-box present deliver?
   Artif Intell Med. 2022 Feb;124:102158. doi: 10.1016/j.artmed.2021.102158. Epub 2021 Aug 28.
4. Machine Learning for COVID-19 Diagnosis and Prognostication: Lessons for Amplifying the Signal While Reducing the Noise.
   Radiol Artif Intell. 2021 Mar 24;3(4):e210011. doi: 10.1148/ryai.2021210011. eCollection 2021 Jul.
5. Human Evaluation of Models Built for Interpretability.
   Proc AAAI Conf Hum Comput Crowdsourc. 2019;7(1):59-67. Epub 2019 Oct 28.
6. Artificial intelligence in breast ultrasonography.
   Ultrasonography. 2021 Apr;40(2):183-190. doi: 10.14366/usg.20117. Epub 2020 Nov 12.
7. Explainable AI: A Review of Machine Learning Interpretability Methods.
   Entropy (Basel). 2020 Dec 25;23(1):18. doi: 10.3390/e23010018.
8. Auto-Encoding and Distilling Scene Graphs for Image Captioning.
   IEEE Trans Pattern Anal Mach Intell. 2022 May;44(5):2313-2327. doi: 10.1109/TPAMI.2020.3042192. Epub 2022 Apr 1.
9. COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images.
   Sci Rep. 2020 Nov 11;10(1):19549. doi: 10.1038/s41598-020-76550-z.
10. A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI.
   IEEE Trans Neural Netw Learn Syst. 2021 Nov;32(11):4793-4813. doi: 10.1109/TNNLS.2020.3027314. Epub 2021 Oct 27.