

Toward explainable deep learning in healthcare through transition matrix and user-friendly features.

Authors

Barmak Oleksander, Krak Iurii, Yakovlev Sergiy, Manziuk Eduard, Radiuk Pavlo, Kuznetsov Vladislav

Affiliations

Department of Computer Science, Khmelnytskyi National University, Khmelnytskyi, Ukraine.

Department of Theoretical Cybernetics, Taras Shevchenko National University of Kyiv, Kyiv, Ukraine.

Publication

Front Artif Intell. 2024 Nov 25;7:1482141. doi: 10.3389/frai.2024.1482141. eCollection 2024.

DOI: 10.3389/frai.2024.1482141
PMID: 39654544
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11625760/
Abstract

Modern artificial intelligence (AI) solutions often face challenges due to the "black box" nature of deep learning (DL) models, which limits their transparency and trustworthiness in critical medical applications. In this study, we propose and evaluate a scalable approach based on a transition matrix to enhance the interpretability of DL models in medical signal and image processing by translating complex model decisions into user-friendly and justifiable features for healthcare professionals. The criteria for choosing interpretable features were clearly defined, incorporating clinical guidelines and expert rules to align model outputs with established medical standards. The proposed approach was tested on two medical datasets: electrocardiography (ECG) for arrhythmia detection and magnetic resonance imaging (MRI) for heart disease classification. The performance of the DL models was compared with expert annotations using Cohen's Kappa coefficient to assess agreement, achieving coefficients of 0.89 for the ECG dataset and 0.80 for the MRI dataset. These results demonstrate strong agreement, underscoring the reliability of the approach in providing accurate, understandable, and justifiable explanations of DL model decisions. The scalability of the approach suggests its potential applicability across various medical domains, enhancing the generalizability and utility of DL models in healthcare while addressing practical challenges and ethical considerations.
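The reported agreement figures (0.89 for ECG, 0.80 for MRI) are Cohen's Kappa coefficients, which measure how often the model and the expert assign the same label beyond what chance alone would produce: kappa = (p_o − p_e) / (1 − p_e), where p_o is observed agreement and p_e is expected agreement under independent raters. A minimal stdlib sketch of the statistic follows; the beat labels below are invented toy data, not the paper's annotations:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters label identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the raters labeled independently,
    # each according to their own marginal label frequencies.
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(count_a[k] * count_b[k] for k in count_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy example: model labels vs. expert labels for 10 beats
# ("N" = normal, "A" = arrhythmic); they disagree on one beat.
model  = ["N", "N", "A", "N", "A", "N", "N", "A", "N", "N"]
expert = ["N", "N", "A", "N", "A", "N", "A", "A", "N", "N"]
print(round(cohen_kappa(model, expert), 2))  # prints 0.78
```

Note that 9/10 raw agreement shrinks to kappa ≈ 0.78 once chance agreement (p_e = 0.54 here) is discounted, which is why kappa is preferred over plain accuracy when comparing model output to expert annotations.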


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9224/11625760/bbb25d3bcf4b/frai-07-1482141-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9224/11625760/34beb5659d06/frai-07-1482141-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9224/11625760/1372ef0b09d1/frai-07-1482141-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9224/11625760/054946619aa0/frai-07-1482141-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9224/11625760/7e79a8c3aa5c/frai-07-1482141-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9224/11625760/c2adbdefff72/frai-07-1482141-g006.jpg

Similar articles

1
Toward explainable deep learning in healthcare through transition matrix and user-friendly features.
Front Artif Intell. 2024 Nov 25;7:1482141. doi: 10.3389/frai.2024.1482141. eCollection 2024.
2
Artificial intelligence in hospital infection prevention: an integrative review.
Front Public Health. 2025 Apr 2;13:1547450. doi: 10.3389/fpubh.2025.1547450. eCollection 2025.
3
Explainable, trustworthy, and ethical machine learning for healthcare: A survey.
Comput Biol Med. 2022 Oct;149:106043. doi: 10.1016/j.compbiomed.2022.106043. Epub 2022 Sep 7.
4
Explainable Artificial Intelligence (XAI) for Deep Learning Based Medical Imaging Classification.
J Imaging. 2023 Aug 30;9(9):177. doi: 10.3390/jimaging9090177.
5
Understanding the black-box: towards interpretable and reliable deep learning models.
PeerJ Comput Sci. 2023 Nov 29;9:e1629. doi: 10.7717/peerj-cs.1629. eCollection 2023.
6
CEFEs: A CNN Explainable Framework for ECG Signals.
Artif Intell Med. 2021 May;115:102059. doi: 10.1016/j.artmed.2021.102059. Epub 2021 Mar 26.
7
Annotation-efficient, patch-based, explainable deep learning using curriculum method for breast cancer detection in screening mammography.
Insights Imaging. 2025 Mar 19;16(1):60. doi: 10.1186/s13244-025-01922-w.
8
DeepXplainer: An interpretable deep learning based approach for lung cancer detection using explainable artificial intelligence.
Comput Methods Programs Biomed. 2024 Jan;243:107879. doi: 10.1016/j.cmpb.2023.107879. Epub 2023 Oct 24.
9
Enhancing interpretability and accuracy of AI models in healthcare: a comprehensive review on challenges and future directions.
Front Robot AI. 2024 Nov 28;11:1444763. doi: 10.3389/frobt.2024.1444763. eCollection 2024.
10
A novel approach of brain-computer interfacing (BCI) and Grad-CAM based explainable artificial intelligence: Use case scenario for smart healthcare.
J Neurosci Methods. 2024 Aug;408:110159. doi: 10.1016/j.jneumeth.2024.110159. Epub 2024 May 7.

Cited by

1
Artificial Intelligence in Cardiovascular Imaging: Current Landscape, Clinical Impact, and Future Directions.
Discoveries (Craiova). 2025 Jun 30;13(1):e211. doi: 10.15190/d.2025.10. eCollection 2025 Apr-Jun.

References

1
Improvement of a prediction model for heart failure survival through explainable artificial intelligence.
Front Cardiovasc Med. 2023 Aug 1;10:1219586. doi: 10.3389/fcvm.2023.1219586. eCollection 2023.
2
Building a trustworthy AI differential diagnosis application for Crohn's disease and intestinal tuberculosis.
BMC Med Inform Decis Mak. 2023 Aug 15;23(1):160. doi: 10.1186/s12911-023-02257-6.
3
Natural language processing: state of the art, current trends and challenges.
Multimed Tools Appl. 2023;82(3):3713-3744. doi: 10.1007/s11042-022-13428-4. Epub 2022 Jul 14.
4
Explainable machine learning to predict long-term mortality in critically ill ventilated patients: a retrospective study in central Taiwan.
BMC Med Inform Decis Mak. 2022 Mar 25;22(1):75. doi: 10.1186/s12911-022-01817-6.
5
An Interpretable Object Detection-Based Model For The Diagnosis Of Neonatal Lung Diseases Using Ultrasound Images.
Annu Int Conf IEEE Eng Med Biol Soc. 2021 Nov;2021:3029-3034. doi: 10.1109/EMBC46164.2021.9630169.
6
Transparency of deep neural networks for medical image analysis: A review of interpretability methods.
Comput Biol Med. 2022 Jan;140:105111. doi: 10.1016/j.compbiomed.2021.105111. Epub 2021 Dec 4.
7
NeuroKit2: A Python toolbox for neurophysiological signal processing.
Behav Res Methods. 2021 Aug;53(4):1689-1696. doi: 10.3758/s13428-020-01516-y. Epub 2021 Feb 2.
8
Deep Learning Techniques for Automatic MRI Cardiac Multi-Structures Segmentation and Diagnosis: Is the Problem Solved?
IEEE Trans Med Imaging. 2018 Nov;37(11):2514-2525. doi: 10.1109/TMI.2018.2837502. Epub 2018 May 17.
9
Modifications to the HIPAA Privacy, Security, Enforcement, and Breach Notification rules under the Health Information Technology for Economic and Clinical Health Act and the Genetic Information Nondiscrimination Act; other modifications to the HIPAA rules.
Fed Regist. 2013 Jan 25;78(17):5565-702.