

A novel XAI framework for explainable AI-ECG using generative counterfactual XAI (GCX).

Authors

Jang Jong-Hwan, Jo Yong-Yeon, Kang Sora, Son Jeong Min, Lee Hak Seung, Kwon Joon-Myoung, Lee Min Sung

Affiliations

Medical AI Co., Ltd., 38, Yeongdong-daero 85-gil, Gangnam-gu, Seoul, Republic of Korea.

Artificial Intelligence and Big Data Research Center, Sejong Medical Research Institute, Bucheon, Republic of Korea.

Publication

Sci Rep. 2025 Jul 2;15(1):23608. doi: 10.1038/s41598-025-08080-5.

Abstract

Generative Counterfactual Explainable Artificial Intelligence (XAI) offers a novel approach to understanding how AI models interpret electrocardiograms (ECGs). Traditional explanation methods focus on highlighting important ECG segments but often fail to clarify why these segments matter or how their alteration affects model predictions. In contrast, the proposed framework explores "what-if" scenarios, generating counterfactual ECGs that increase or decrease a model's predictive values. This approach has the potential to increase clinicians' trust by clarifying how specific changes, such as increased T wave amplitude or PR interval prolongation, influence the model's decisions. Through a series of validation experiments, the framework demonstrates its ability to produce counterfactual ECGs that closely align with established clinical knowledge, including characteristic alterations associated with potassium imbalances and atrial fibrillation. By clearly visualizing how incremental modifications in ECG morphology and rhythm affect artificial intelligence-applied ECG (AI-ECG) predictions, this generative counterfactual method moves beyond static attribution maps and has the potential to increase clinicians' trust in AI-ECG systems. As a result, this approach offers a promising path toward enhancing the explainability and clinical reliability of AI-based tools for cardiovascular diagnostics.
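The abstract describes counterfactual generation only at a high level, and the authors' generative architecture is not specified here. Purely as an illustration of the underlying "what-if" idea, the sketch below uses a hypothetical toy logistic model standing in for an AI-ECG network (not the paper's method) and finds a counterfactual by gradient search: the input is nudged toward a target prediction while an L2 penalty keeps it close to the original trace.

```python
import numpy as np

# Toy stand-in for an AI-ECG model: logistic regression on a 1-D signal.
# score(x) = sigmoid(w . x + b). All names here are hypothetical.
rng = np.random.default_rng(0)
n = 500                                  # samples in the toy "ECG" trace
w = rng.normal(size=n) / np.sqrt(n)      # model weights
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

def counterfactual(x, target=0.9, step=0.5, l2=0.1, iters=200):
    """Gradient search for a nearby input with a higher model score.

    Minimizes (predict(x') - target)^2 + l2 * ||x' - x||^2, so the
    counterfactual stays close to the original trace while its
    prediction moves toward `target`.
    """
    x_cf = x.copy()
    for _ in range(iters):
        p = predict(x_cf)
        # d/dx' of (p - target)^2 is 2 (p - target) p (1 - p) w;
        # the proximity penalty adds 2 l2 (x' - x).
        grad = 2 * (p - target) * p * (1 - p) * w + 2 * l2 * (x_cf - x)
        x_cf -= step * grad
    return x_cf

x0 = rng.normal(size=n) * 0.1            # original toy "ECG"
x_cf = counterfactual(x0)
print(float(predict(x0)), float(predict(x_cf)))
```

The proximity penalty mirrors the clinical requirement that a counterfactual ECG remain a plausible small modification of the original; inspecting `x_cf - x0` then shows which parts of the trace the model relies on to raise its prediction.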


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d86e/12223168/4c1e35153710/41598_2025_8080_Fig1_HTML.jpg
