
Saliency-driven explainable deep learning in medical imaging: bridging visual explainability and statistical quantitative analysis.

Author information

Brima Yusuf, Atemkeng Marcellin

Affiliations

Computer Vision, Institute of Cognitive Science, Osnabrück University, Osnabrueck, D-49090, Lower Saxony, Germany.

Department of Mathematics, Rhodes University, Grahamstown, 6140, Eastern Cape, South Africa.

Publication information

BioData Min. 2024 Jun 22;17(1):18. doi: 10.1186/s13040-024-00370-4.

Abstract

Deep learning shows great promise for medical image analysis but often lacks explainability, hindering its adoption in healthcare. Attribution techniques that explain model reasoning can potentially increase trust in deep learning among clinical stakeholders. In the literature, much of the research on attribution in medical imaging focuses on visual inspection rather than statistical quantitative analysis.

In this paper, we propose an image-based saliency framework to enhance the explainability of deep learning models in medical image analysis. We use adaptive path-based gradient integration, gradient-free techniques, and class activation mapping along with its derivatives to attribute predictions made by recent deep convolutional neural network models on brain tumor MRI and COVID-19 chest X-ray datasets.

The proposed framework integrates qualitative and statistical quantitative assessments, employing Accuracy Information Curves (AICs) and Softmax Information Curves (SICs) to measure the effectiveness of saliency methods in retaining critical image information and their correlation with model predictions. Visual inspections indicate that methods such as ScoreCAM, XRAI, GradCAM, and GradCAM++ consistently produce focused and clinically interpretable attribution maps. These methods highlighted possible biomarkers, exposed model biases, and offered insights into the links between input features and predictions, demonstrating their ability to elucidate model reasoning on these datasets. Empirical evaluations reveal that ScoreCAM and XRAI are particularly effective in retaining relevant image regions, as reflected in their higher AUC values. However, SICs highlight variability, with instances of random saliency masks outperforming established methods, emphasizing the need to combine visual and empirical metrics for a comprehensive evaluation.

The results underscore the importance of selecting appropriate saliency methods for specific medical imaging tasks and suggest that combining qualitative and quantitative approaches can enhance the transparency, trustworthiness, and clinical adoption of deep learning models in healthcare. This study advances model explainability to increase trust in deep learning among healthcare stakeholders by revealing the rationale behind predictions. Future research should refine empirical metrics for stability and reliability, include more diverse imaging modalities, and focus on improving model explainability to support clinical decision-making.
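The attribution methods named above (GradCAM, GradCAM++, ScoreCAM, XRAI) all operate on convolutional feature maps. As a concrete illustration of the class-activation-mapping family, the sketch below computes a basic Grad-CAM heatmap in PyTorch. It is not the authors' implementation: the torchvision ResNet-50 backbone, the choice of `model.layer4[-1]` as the target layer, and the random stand-in input are illustrative assumptions.

```python
# Minimal Grad-CAM sketch (illustrative only, not the paper's code).
import torch
import torch.nn.functional as F
from torchvision import models

def grad_cam(model, image, target_layer, class_idx=None):
    """Return a normalized Grad-CAM heatmap for a 1xCxHxW input."""
    activations, gradients = {}, {}

    def fwd_hook(_, __, output):
        activations["value"] = output.detach()

    def bwd_hook(_, grad_input, grad_output):
        gradients["value"] = grad_output[0].detach()

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)
    try:
        logits = model(image)
        if class_idx is None:
            class_idx = logits.argmax(dim=1).item()
        model.zero_grad()
        logits[0, class_idx].backward()
    finally:
        h1.remove()
        h2.remove()

    # Channel weights = global-average-pooled gradients (core Grad-CAM step).
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam.squeeze().cpu(), class_idx

if __name__ == "__main__":
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
    x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed MRI/X-ray slice
    heatmap, predicted_class = grad_cam(model, x, model.layer4[-1])
    print(heatmap.shape, predicted_class)
```

In practice the heatmap would be overlaid on the MRI or chest X-ray slice for visual inspection, while AIC/SIC-style evaluation would progressively mask the least-salient regions and track how the model's accuracy or softmax confidence degrades.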


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2582/11193223/0a71d4ab7943/13040_2024_370_Fig1_HTML.jpg
