Explainable CNN for brain tumor detection and classification through XAI based key features identification.

Author information

Iftikhar Shagufta, Anjum Nadeem, Siddiqui Abdul Basit, Ur Rehman Masood, Ramzan Naeem

Affiliations

Department of Computer Science, Capital University of Science and Technology, Islamabad, Pakistan.

James Watt School of Engineering, University of Glasgow, Glasgow, G12 8QQ, UK.

Publication information

Brain Inform. 2025 Apr 30;12(1):10. doi: 10.1186/s40708-025-00257-y.

Abstract

Despite significant advancements in brain tumor classification, many existing models suffer from complex structures that make them difficult to interpret. This complexity can hinder the transparency of the decision-making process, causing models to rely on irrelevant features or normal soft tissue. Moreover, these models often include additional layers and parameters, which further complicate the classification process. Our work addresses these limitations by introducing a novel methodology that combines Explainable AI (XAI) techniques with a Convolutional Neural Network (CNN) architecture. The major contribution of this paper is ensuring that the model focuses on the most relevant features for tumor detection and classification while simultaneously reducing complexity by minimizing the number of layers. This approach enhances the model's transparency and robustness, giving clear insights into its decision-making process through XAI techniques such as Gradient-weighted Class Activation Mapping (Grad-CAM), SHapley Additive exPlanations (SHAP), and Local Interpretable Model-agnostic Explanations (LIME). Additionally, the approach demonstrates strong performance, achieving 99% accuracy on seen data and 95% on unseen data, highlighting its generalizability and reliability. This balance of simplicity, interpretability, and high accuracy represents a significant advancement in brain tumor classification.
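
To make the interpretability claim concrete, below is a minimal Grad-CAM sketch in PyTorch. Grad-CAM is the technique named in the abstract, but everything else here is an illustrative assumption: the SmallCNN architecture, its layer shapes, and the four-class label set are stand-ins for this example, not the authors' published model. The same hook-based pattern applies to the last convolutional layer of any CNN.

```python
# Minimal Grad-CAM sketch (hypothetical; not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """Illustrative shallow CNN for 224x224 single-channel MRI slices
    (assumed four classes: glioma, meningioma, pituitary, no tumor)."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # 224 -> 56 after two pools

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def grad_cam(model: nn.Module, image: torch.Tensor, target_class: int) -> torch.Tensor:
    """Return an H x W heatmap in [0, 1] for `target_class`."""
    activations, gradients = [], []
    last_conv = model.features[3]  # last conv layer in this sketch
    fh = last_conv.register_forward_hook(lambda m, i, o: activations.append(o))
    bh = last_conv.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))

    logits = model(image)               # image: (1, 1, 224, 224)
    model.zero_grad()
    logits[0, target_class].backward()  # gradient of the target class score
    fh.remove(); bh.remove()

    acts = activations[0].detach()                   # (1, C, h, w)
    grads = gradients[0].detach()                    # (1, C, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)   # global-average-pool the gradients
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))  # weighted channel sum
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    cam = cam.squeeze()
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# Usage with a dummy tensor standing in for a preprocessed MRI slice.
model = SmallCNN().eval()
mri = torch.randn(1, 1, 224, 224)
heatmap = grad_cam(model, mri, target_class=0)  # (224, 224), ready to overlay
```

In practice the resulting heatmap is overlaid on the input MRI slice, letting a reader verify that the network attends to the tumor region rather than normal soft tissue, which is the kind of decision-process transparency the abstract describes.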

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1756/12044100/e9075cf17edf/40708_2025_257_Fig1_HTML.jpg
