

Explainable CNN for brain tumor detection and classification through XAI based key features identification.

Authors

Iftikhar Shagufta, Anjum Nadeem, Siddiqui Abdul Basit, Ur Rehman Masood, Ramzan Naeem

Affiliations

Department of Computer Science, Capital University of Science and Technology, Islamabad, Pakistan.

James Watt School of Engineering, University of Glasgow, Glasgow, G12 8QQ, UK.

Publication

Brain Inform. 2025 Apr 30;12(1):10. doi: 10.1186/s40708-025-00257-y.

DOI: 10.1186/s40708-025-00257-y
PMID: 40304860
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12044100/
Abstract

Despite significant advancements in brain tumor classification, many existing models suffer from complex structures that make them difficult to interpret. This complexity can hinder the transparency of the decision-making process, causing models to rely on irrelevant features or normal soft tissues. Moreover, these models often include additional layers and parameters, which further complicate the classification process. Our work addresses these limitations by introducing a novel methodology that combines Explainable AI (XAI) techniques with a Convolutional Neural Network (CNN) architecture. The major contribution of this paper is ensuring that the model focuses on the most relevant features for tumor detection and classification while simultaneously reducing complexity by minimizing the number of layers. This approach enhances the model's transparency and robustness, giving clear insights into its decision-making process through XAI techniques such as Gradient-weighted Class Activation Mapping (Grad-CAM), SHapley Additive exPlanations (SHAP), and Local Interpretable Model-agnostic Explanations (LIME). Additionally, the approach demonstrates strong performance, achieving 99% accuracy on seen data and 95% on unseen data, highlighting its generalizability and reliability. This balance of simplicity, interpretability, and high accuracy represents a significant advancement in brain tumor classification.
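Of the three XAI techniques the abstract names, Grad-CAM has the simplest core computation: channel weights are the spatially averaged gradients of the class score, and the heatmap is a ReLU-rectified weighted sum of the feature maps. Below is a minimal NumPy sketch of that weighting step, assuming the last-convolutional-layer activations and the class-score gradients have already been extracted from the network; the function name and array shapes are illustrative, not taken from the paper.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a Grad-CAM heatmap from precomputed arrays.

    activations: (K, H, W) feature maps from the last conv layer.
    gradients:   (K, H, W) gradients of the target class score
                 with respect to those feature maps.
    Returns an (H, W) heatmap normalized to [0, 1].
    """
    # alpha_k: global-average-pool each channel's gradient map
    weights = gradients.mean(axis=(1, 2))
    # Weighted sum of feature maps over the channel axis
    cam = np.tensordot(weights, activations, axes=1)
    # ReLU: keep only features with a positive influence on the class
    cam = np.maximum(cam, 0)
    # Normalize for overlaying on the input image
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

In a full pipeline the heatmap would be upsampled to the input resolution and overlaid on the MRI slice; frameworks expose the required activations and gradients via forward/backward hooks.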

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1756/12044100/e9075cf17edf/40708_2025_257_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1756/12044100/a6749da4e770/40708_2025_257_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1756/12044100/8a4c225eeb3d/40708_2025_257_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1756/12044100/02b1d06727fe/40708_2025_257_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1756/12044100/a2a6f7a21991/40708_2025_257_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1756/12044100/5ac006dff16f/40708_2025_257_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1756/12044100/21a18b0e589b/40708_2025_257_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1756/12044100/fa1e1b9a5007/40708_2025_257_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1756/12044100/716602b079b1/40708_2025_257_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1756/12044100/1cd5aa71e85c/40708_2025_257_Figd_HTML.jpg

Similar Articles

1. Explainable CNN for brain tumor detection and classification through XAI based key features identification.
Brain Inform. 2025 Apr 30;12(1):10. doi: 10.1186/s40708-025-00257-y.
2. Utilizing customized CNN for brain tumor prediction with explainable AI.
Heliyon. 2024 Oct 9;10(20):e38997. doi: 10.1016/j.heliyon.2024.e38997. eCollection 2024 Oct 30.
3. Towards Explainable Detection of Alzheimer's Disease: A Fusion of Deep Convolutional Neural Network and Enhanced Weighted Fuzzy C-Mean.
Curr Med Imaging. 2024;20:e15734056317205. doi: 10.2174/0115734056317205241014060633.
4. ResViT FusionNet Model: An explainable AI-driven approach for automated grading of diabetic retinopathy in retinal images.
Comput Biol Med. 2025 Mar;186:109656. doi: 10.1016/j.compbiomed.2025.109656. Epub 2025 Jan 16.
5. Concrete Crack Detection and Segregation: A Feature Fusion, Crack Isolation, and Explainable AI-Based Approach.
J Imaging. 2024 Aug 31;10(9):215. doi: 10.3390/jimaging10090215.
6. A novel approach of brain-computer interfacing (BCI) and Grad-CAM based explainable artificial intelligence: Use case scenario for smart healthcare.
J Neurosci Methods. 2024 Aug;408:110159. doi: 10.1016/j.jneumeth.2024.110159. Epub 2024 May 7.
7. An explainable AI-based blood cell classification using optimized convolutional neural network.
J Pathol Inform. 2024 Jul 2;15:100389. doi: 10.1016/j.jpi.2024.100389. eCollection 2024 Dec.
8. ALL-Net: integrating CNN and explainable-AI for enhanced diagnosis and interpretation of acute lymphoblastic leukemia.
PeerJ Comput Sci. 2025 Jan 30;11:e2600. doi: 10.7717/peerj-cs.2600. eCollection 2025.
9. Revolutionizing breast ultrasound diagnostics with EfficientNet-B7 and Explainable AI.
BMC Med Imaging. 2024 Sep 2;24(1):230. doi: 10.1186/s12880-024-01404-3.
10. An Explainable AI Paradigm for Alzheimer's Diagnosis Using Deep Transfer Learning.
Diagnostics (Basel). 2024 Feb 5;14(3):345. doi: 10.3390/diagnostics14030345.

Cited By

1. A Web-Deployed, Explainable AI System for Comprehensive Brain Tumor Diagnosis.
Neurol Int. 2025 Aug 4;17(8):121. doi: 10.3390/neurolint17080121.

References

1. Precision meets generalization: Enhancing brain tumor classification via pretrained DenseNet with global average pooling and hyperparameter tuning.
PLoS One. 2024 Sep 6;19(9):e0307825. doi: 10.1371/journal.pone.0307825. eCollection 2024.
2. Brain Tumor Classification from MRI Using Image Enhancement and Convolutional Neural Network Techniques.
Brain Sci. 2023 Sep 14;13(9):1320. doi: 10.3390/brainsci13091320.
3. TumorDetNet: A unified deep learning model for brain tumor detection and classification.
PLoS One. 2023 Sep 27;18(9):e0291200. doi: 10.1371/journal.pone.0291200. eCollection 2023.
4. Explainable Convolutional Neural Networks for Brain Cancer Detection and Localisation.
Sensors (Basel). 2023 Sep 2;23(17):7614. doi: 10.3390/s23177614.
5. Classification of Brain Tumours in MRI Images using a Convolutional Neural Network.
Curr Med Imaging. 2024;20:e270323214998. doi: 10.2174/1573405620666230327124902.
6. Refined Automatic Brain Tumor Classification Using Hybrid Convolutional Neural Networks for MRI Scans.
Diagnostics (Basel). 2023 Feb 23;13(5):864. doi: 10.3390/diagnostics13050864.
7. Multiple Brain Tumor Classification with Dense CNN Architecture Using Brain MRI Images.
Life (Basel). 2023 Jan 28;13(2):349. doi: 10.3390/life13020349.
8. Computer-Aided Early Melanoma Brain-Tumor Detection Using Deep-Learning Approach.
Biomedicines. 2023 Jan 11;11(1):184. doi: 10.3390/biomedicines11010184.
9. Classification of Brain Tumor from Magnetic Resonance Imaging Using Vision Transformers Ensembling.
Curr Oncol. 2022 Oct 7;29(10):7498-7511. doi: 10.3390/curroncol29100590.
10. Explanation-Driven Deep Learning Model for Prediction of Brain Tumour Status Using MRI Image Data.
Front Genet. 2022 Mar 14;13:822666. doi: 10.3389/fgene.2022.822666. eCollection 2022.