
IMPA-Net: Interpretable Multi-Part Attention Network for Trustworthy Brain Tumor Classification from MRI.

Author Information

Xie Yuting, Zaccagna Fulvio, Rundo Leonardo, Testa Claudia, Zhu Ruifeng, Tonon Caterina, Lodi Raffaele, Manners David Neil

Affiliations

Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy.

Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bellaria Hospital, 40139 Bologna, Italy.

Publication Information

Diagnostics (Basel). 2024 May 11;14(10):997. doi: 10.3390/diagnostics14100997.

Abstract

Deep learning (DL) networks have shown attractive performance in medical image processing tasks such as brain tumor classification. However, they are often criticized as mysterious "black boxes": the opacity of the model and of its reasoning process makes it difficult for health workers to decide whether to trust the prediction outcomes. In this study, we develop an interpretable multi-part attention network (IMPA-Net) for brain tumor classification that enhances the interpretability and trustworthiness of classification outcomes. The proposed model not only predicts the tumor grade but also provides a global explanation of the model's behavior and a local explanation that justifies each individual prediction. The global explanation is represented as a group of feature patterns that the model learns in order to distinguish the high-grade glioma (HGG) and low-grade glioma (LGG) classes. The local explanation interprets the reasoning behind an individual prediction by calculating the similarity between prototypical parts of the image and a group of pre-learned, task-related features. Experiments conducted on the BraTS2017 dataset demonstrate that IMPA-Net is a verifiable model for this classification task: two radiologists assessed 86% of the feature patterns as valid representations of task-relevant medical features. The model achieves a classification accuracy of 92.12%, and 81.17% of its predictions were evaluated as trustworthy on the basis of their local explanations. Our interpretable model is thus a trustworthy decision aid for glioma classification. Compared with black-box CNNs, it allows health workers and patients to understand the reasoning process and trust the prediction outcomes.
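The local-explanation mechanism described in the abstract, scoring prototypical parts of an image against a bank of pre-learned feature patterns, is in the spirit of prototype-based networks such as ProtoPNet. The sketch below is a minimal, hypothetical illustration of that general idea in PyTorch, not the authors' implementation; the class and parameter names (PrototypeSimilarityHead, num_prototypes, and so on) are assumptions introduced here for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeSimilarityHead(nn.Module):
    """Hypothetical sketch of prototype-based classification: each
    learned prototype vector stands in for a 'feature pattern', and an
    image is classified (and locally explained) by how similar its
    feature-map patches are to those prototypes. Illustrative only,
    not the IMPA-Net architecture."""

    def __init__(self, feat_channels=512, num_prototypes=10, num_classes=2):
        super().__init__()
        # Pre-learned feature patterns (the "global explanation"):
        # a bank of prototype vectors covering HGG/LGG-relevant features.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, feat_channels))
        # Linear layer maps prototype-similarity scores to class logits.
        self.classifier = nn.Linear(num_prototypes, num_classes, bias=False)

    def forward(self, feat_map):
        # feat_map: (B, C, H, W) output of a CNN backbone.
        B, C, H, W = feat_map.shape
        patches = feat_map.permute(0, 2, 3, 1).reshape(B, H * W, C)
        # Cosine similarity between every spatial patch and every prototype.
        sim = F.cosine_similarity(
            patches.unsqueeze(2),               # (B, HW, 1, C)
            self.prototypes.view(1, 1, -1, C),  # (1, 1, P, C)
            dim=-1,
        )                                       # (B, HW, P)
        # Max over locations: the best-matching patch for each prototype.
        # These per-prototype scores form the "local explanation".
        proto_scores, best_loc = sim.max(dim=1)  # each (B, P)
        logits = self.classifier(proto_scores)
        return logits, proto_scores, best_loc

if __name__ == "__main__":
    head = PrototypeSimilarityHead()
    fake_features = torch.randn(2, 512, 7, 7)  # stand-in backbone output
    logits, scores, locs = head(fake_features)
    print(logits.shape, scores.shape)  # torch.Size([2, 2]) torch.Size([2, 10])
```

Under this reading, the per-prototype scores and their best-matching locations serve as the local explanation for one prediction, while the learned prototype bank plays the role of the global explanation; a prediction would be judged trustworthy when its top-scoring prototypes correspond to clinically meaningful features, which is what the radiologist evaluation reported in the abstract assesses.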

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/10bf/11119919/975b3bfc4291/diagnostics-14-00997-g001.jpg
