

Toward explainable deep learning in healthcare through transition matrix and user-friendly features.

Author Information

Barmak Oleksander, Krak Iurii, Yakovlev Sergiy, Manziuk Eduard, Radiuk Pavlo, Kuznetsov Vladislav

Affiliations

Department of Computer Science, Khmelnytskyi National University, Khmelnytskyi, Ukraine.

Department of Theoretical Cybernetics, Taras Shevchenko National University of Kyiv, Kyiv, Ukraine.

Publication Information

Front Artif Intell. 2024 Nov 25;7:1482141. doi: 10.3389/frai.2024.1482141. eCollection 2024.

Abstract

Modern artificial intelligence (AI) solutions often face challenges due to the "black box" nature of deep learning (DL) models, which limits their transparency and trustworthiness in critical medical applications. In this study, we propose and evaluate a scalable approach based on a transition matrix to enhance the interpretability of DL models in medical signal and image processing by translating complex model decisions into user-friendly and justifiable features for healthcare professionals. The criteria for choosing interpretable features were clearly defined, incorporating clinical guidelines and expert rules to align model outputs with established medical standards. The proposed approach was tested on two medical datasets: electrocardiography (ECG) for arrhythmia detection and magnetic resonance imaging (MRI) for heart disease classification. The performance of the DL models was compared with expert annotations using Cohen's Kappa coefficient to assess agreement, achieving coefficients of 0.89 for the ECG dataset and 0.80 for the MRI dataset. These results demonstrate strong agreement, underscoring the reliability of the approach in providing accurate, understandable, and justifiable explanations of DL model decisions. The scalability of the approach suggests its potential applicability across various medical domains, enhancing the generalizability and utility of DL models in healthcare while addressing practical challenges and ethical considerations.
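The abstract assesses agreement between the DL models and expert annotations using Cohen's Kappa coefficient. As a minimal illustration of that metric (not the authors' code; the label sequences below are hypothetical), Kappa can be computed from two raters' paired labels as observed agreement corrected for chance agreement:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's Kappa: agreement between two raters beyond chance.

    rater_a, rater_b: equal-length sequences of categorical labels,
    e.g. model predictions vs. expert annotations.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters label identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: model predictions vs. expert labels for five ECG beats.
model = ["arrhythmia", "normal", "normal", "arrhythmia", "normal"]
expert = ["arrhythmia", "normal", "arrhythmia", "arrhythmia", "normal"]
print(cohens_kappa(model, expert))
```

Values near 1.0 (such as the reported 0.89 and 0.80) indicate strong agreement; 0 indicates agreement no better than chance.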


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9224/11625760/bbb25d3bcf4b/frai-07-1482141-g001.jpg
