Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond.

Author information

Yang Guang, Ye Qinghao, Xia Jun

Affiliations

National Heart and Lung Institute, Imperial College London, London, UK.

Royal Brompton Hospital, London, UK.

Publication information

Inf Fusion. 2022 Jan;77:29-52. doi: 10.1016/j.inffus.2021.07.016.


DOI:10.1016/j.inffus.2021.07.016
PMID:34980946
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8459787/
Abstract

Explainable Artificial Intelligence (XAI) is an emerging research topic of machine learning aimed at how AI systems' choices are made. This research field inspects the measures and models involved in decision-making and seeks solutions to explain them explicitly. Many of the machine learning algorithms cannot manifest how and why a decision has been cast. This is particularly true of the most popular deep neural network approaches currently in use. Consequently, our confidence in AI systems can be hindered by the lack of explainability in these models. The XAI becomes more and more crucial for deep learning powered applications, especially for medical and healthcare studies, although in general these deep neural networks can return an arresting dividend in performance. The insufficient explainability and transparency in most existing AI systems can be one of the major reasons that successful implementation and integration of AI tools into routine clinical practice are uncommon. In this study, we first surveyed the current progress of XAI and in particular its advances in healthcare applications. We then introduced our solutions for XAI leveraging multi-modal and multi-centre data fusion, and subsequently validated in two showcases following real clinical scenarios. Comprehensive quantitative and qualitative analyses can prove the efficacy of our proposed XAI solutions, from which we can envisage successful applications in a broader range of clinical questions.
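The post-hoc, gradient-based attribution family that surveys like this one cover can be illustrated on a toy model. The sketch below is purely illustrative (made-up weights and inputs, not the authors' method or data): for a logistic-regression "black box", the input gradient of the prediction score ranks how strongly each feature sways the output, which is the simplest form of saliency-style explanation.

```python
import numpy as np

# Toy "black box": logistic regression over 4 features.
# Weights and input are illustrative, not from the paper.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, w, b):
    # Scalar probability output of the model.
    return sigmoid(w @ x + b)

def input_gradient_saliency(x, w, b):
    # For p = sigmoid(w.x + b), d p / d x_i = p * (1 - p) * w_i.
    # The sign gives direction of influence; the magnitude gives strength.
    p = predict(x, w, b)
    return p * (1.0 - p) * w

w = np.array([2.0, -1.0, 0.5, 0.0])
b = 0.1
x = np.array([0.3, 0.8, 0.5, 0.9])

saliency = input_gradient_saliency(x, w, b)
ranking = np.argsort(-np.abs(saliency))  # most influential feature first
```

For deep networks the same idea is computed by backpropagation rather than in closed form (e.g. saliency maps or Grad-CAM over image inputs), which is where the medical-imaging showcases in the paper apply it.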


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/27c5437d16a4/gr1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/d57a1e6ee719/gr2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/77f81dba04cf/gr3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/bd011dd9d541/gr4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/deeece30dad6/gr5.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/58a547fc3339/gr6.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/5ef7742e9255/gr7.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/5a475072a279/gr8.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/ff06d6a1e64c/gr9.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/98135b28ec0a/gr10.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/f2fab80bf7aa/gr11.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/d767758fc406/gr12.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/043c53d9aba4/gr13.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/8b55d95f19b2/gr14.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/64772599f66a/gr15.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/22bd0902e51d/gr16.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/1982df6d4bb1/gr17.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/be46247d733f/gr18.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7db3/8459787/622e5054ba51/gr19.jpg

Similar articles

[1]
Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond.

Inf Fusion. 2022-1

[2]
BenchXAI: Comprehensive benchmarking of post-hoc explainable AI methods on multi-modal biomedical data.

Comput Biol Med. 2025-6

[3]
Demystifying the black box: A survey on explainable artificial intelligence (XAI) in bioinformatics.

Comput Struct Biotechnol J. 2025-1-10

[4]
Explainable AI in medical imaging: An overview for clinical practitioners - Beyond saliency-based XAI approaches.

Eur J Radiol. 2023-5

[5]
Explainable AI for Bioinformatics: Methods, Tools and Applications.

Brief Bioinform. 2023-9-20

[6]
Applications of Explainable Artificial Intelligence in Diagnosis and Surgery.

Diagnostics (Basel). 2022-1-19

[7]
Exploring the Applications of Explainability in Wearable Data Analytics: Systematic Literature Review.

J Med Internet Res. 2024-12-24

[8]
Model-agnostic explainable artificial intelligence tools for severity prediction and symptom analysis on Indian COVID-19 data.

Front Artif Intell. 2023-12-4

[9]
Applications of and issues with machine learning in medicine: Bridging the gap with explainable AI.

Biosci Trends. 2025-1-14

[10]
Evaluating Explainable Artificial Intelligence (XAI) techniques in chest radiology imaging through a human-centered Lens.

PLoS One. 2024

Cited by

[1]
Machine learning for myocarditis diagnosis using cardiovascular magnetic resonance: a systematic review, diagnostic test accuracy meta-analysis, and comparison with human physicians.

Int J Cardiovasc Imaging. 2025-9-9

[2]
Personalized health monitoring using explainable AI: bridging trust in predictive healthcare.

Sci Rep. 2025-8-29

[3]
Development and Validation of a Machine Learning-Based Screening Algorithm to Predict High-Risk Hepatitis C Infection.

Open Forum Infect Dis. 2025-8-15

[4]
Artificial Intelligence Applications in Emergency Toxicology: Advancements and Challenges.

J Med Internet Res. 2025-8-22

[5]
A Comprehensive Comparison and Evaluation of AI-Powered Healthcare Mobile Applications' Usability.

Healthcare (Basel). 2025-7-26

[6]
Explainable semi-supervised model for predicting invasion depth of esophageal squamous cell carcinoma based on the IPCL and AVA patterns.

Sci Rep. 2025-7-2

[7]
Medical digital twins: enabling precision medicine and medical artificial intelligence.

Lancet Digit Health. 2025-6-14

[8]
Development and validation of an interpretable nomogram for predicting the risk of the prolonged postoperative length of stay for tuberculous spondylitis: a novel approach for risk stratification.

BMC Musculoskelet Disord. 2025-6-2

[9]
Beyond Biomarkers: Machine Learning-Driven Multiomics for Personalized Medicine in Gastric Cancer.

J Pers Med. 2025-4-24

[10]
Development of Explainable Machine Learning Models to Identify Patients at Risk for 1-Year Mortality and New Distant Metastases Postendoprosthetic Reconstruction for Lower Extremity Bone Tumors: A Secondary Analysis of the PARITY Trial.

JB JS Open Access. 2025-5-22

References

[1]
Respond-CAM: Analyzing Deep Models for 3D Imaging Data by Visualizations.

Med Image Comput Comput Assist Interv. 2018-9

[2]
Deep ROC Analysis and AUC as Balanced Average Accuracy, for Improved Classifier Selection, Audit and Explanation.

IEEE Trans Pattern Anal Mach Intell. 2023-1

[3]
The three ghosts of medical AI: Can the black-box present deliver?

Artif Intell Med. 2022-2

[4]
Machine Learning for COVID-19 Diagnosis and Prognostication: Lessons for Amplifying the Signal While Reducing the Noise.

Radiol Artif Intell. 2021-3-24

[5]
Human Evaluation of Models Built for Interpretability.

Proc AAAI Conf Hum Comput Crowdsourc. 2019

[6]
Artificial intelligence in breast ultrasonography.

Ultrasonography. 2021-4

[7]
Explainable AI: A Review of Machine Learning Interpretability Methods.

Entropy (Basel). 2020-12-25

[8]
Auto-Encoding and Distilling Scene Graphs for Image Captioning.

IEEE Trans Pattern Anal Mach Intell. 2022-5

[9]
COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images.

Sci Rep. 2020-11-11

[10]
A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI.

IEEE Trans Neural Netw Learn Syst. 2021-11
