
Explainable Artificial Intelligence in Radiological Cardiovascular Imaging-A Systematic Review.

Authors

Matteo Haupt, Martin H. Maurer, Rohit Philip Thomas

Affiliation

Department of Diagnostic and Interventional Radiology, Carl von Ossietzky Universität Oldenburg, 26129 Oldenburg, Germany.

Publication

Diagnostics (Basel). 2025 May 31;15(11):1399. doi: 10.3390/diagnostics15111399.

Abstract

Artificial intelligence (AI) and deep learning are increasingly applied in cardiovascular imaging. However, the "black box" nature of these models raises challenges for clinical trust and integration. Explainable Artificial Intelligence (XAI) seeks to address these concerns by providing insights into model decision-making. This systematic review synthesizes current research on the use of XAI methods in radiological cardiovascular imaging. A systematic literature search was conducted in PubMed, Scopus, and Web of Science to identify peer-reviewed original research articles published between January 2015 and March 2025. Studies were included if they applied XAI techniques such as Gradient-Weighted Class Activation Mapping (Grad-CAM), Shapley Additive Explanations (SHAP), Local Interpretable Model-Agnostic Explanations (LIME), or saliency maps to cardiovascular imaging modalities, including cardiac computed tomography (CT), magnetic resonance imaging (MRI), echocardiography and other ultrasound examinations, and chest X-ray (CXR). Studies focusing on nuclear medicine, on structured/tabular data without imaging, or lacking concrete explainability features were excluded. Screening and data extraction followed PRISMA guidelines. A total of 28 studies met the inclusion criteria. Ultrasound examinations (n = 9) and CT (n = 9) were the most common imaging modalities, followed by MRI (n = 6) and chest X-rays (n = 4). Clinical applications included disease classification (e.g., coronary artery disease and valvular heart disease) and the detection of myocardial or congenital abnormalities. Grad-CAM was the most frequently employed XAI method, followed by SHAP. Most studies used saliency-based techniques to generate visual explanations of model predictions. XAI holds considerable promise for improving the transparency and clinical acceptance of deep learning models in cardiovascular imaging. However, the evaluation of XAI methods remains largely qualitative, and standardization is lacking. Future research should focus on robust, quantitative assessment of explainability, prospective clinical validation, and the development of more advanced XAI techniques beyond saliency-based methods. Strengthening the interpretability of AI models will be crucial to ensuring their safe, ethical, and effective integration into cardiovascular care.
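Since Grad-CAM is the XAI method most frequently employed in the reviewed studies, a minimal sketch may help illustrate how such saliency maps are produced. The example below is illustrative only and not drawn from any reviewed study: it assumes PyTorch with a stock torchvision ResNet-18 backbone, hooks the last convolutional block, and uses a random tensor as a stand-in for a preprocessed cardiac image.

```python
# Minimal Grad-CAM sketch (illustrative assumptions: torchvision ResNet-18,
# layer4 as the target convolutional block, random input in place of a real image).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    # Capture the feature maps of the target layer on the forward pass.
    activations["feat"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    # Capture the gradient of the class score w.r.t. those feature maps.
    gradients["feat"] = grad_out[0].detach()

model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)        # stand-in for a preprocessed cardiac image
logits = model(x)
score = logits[0, logits.argmax()]     # class score to be explained
model.zero_grad()
score.backward()

# Global-average-pool the gradients to get one weight per feature map,
# form the weighted sum of activations, and keep only positive evidence.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
```

The resulting `cam` tensor is the class activation map that is typically overlaid on the input image as a heatmap, which is the visual explanation format most of the reviewed studies report.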


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2d5b/12155260/90f400f6a7a1/diagnostics-15-01399-g001.jpg
