


Explanation-Driven Deep Learning Model for Prediction of Brain Tumour Status Using MRI Image Data.

Authors

Gaur Loveleen, Bhandari Mohan, Razdan Tanvi, Mallik Saurav, Zhao Zhongming

Affiliations

Amity International Business School, Amity University, Noida, India.

Nepal College of Information Technology, Lalitpur, Nepal.

Publication

Front Genet. 2022 Mar 14;13:822666. doi: 10.3389/fgene.2022.822666. eCollection 2022.

DOI: 10.3389/fgene.2022.822666
PMID: 35360838
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8964286/
Abstract

Cancer research has seen explosive development exploring deep learning (DL) techniques for analysing magnetic resonance imaging (MRI) images for predicting brain tumours. We have observed a substantial gap in explanation, interpretability, and high accuracy for DL models. Consequently, we propose an explanation-driven DL model by utilising a convolutional neural network (CNN), local interpretable model-agnostic explanation (LIME), and Shapley additive explanation (SHAP) for the prediction of discrete subtypes of brain tumours (meningioma, glioma, and pituitary) using an MRI image dataset. Unlike previous models, our model used a dual-input CNN approach to prevail over the classification challenge with images of inferior quality in terms of noise and metal artifacts by adding Gaussian noise. Our CNN training results reveal 94.64% accuracy as compared to other state-of-the-art methods. We used SHAP to ensure consistency and local accuracy for interpretation as Shapley values examine all future predictions applying all possible combinations of inputs. In contrast, LIME constructs sparse linear models around each prediction to illustrate how the model operates in the immediate area. Our emphasis for this study is interpretability and high accuracy, which is critical for realising disparities in predictive performance, helpful in developing trust, and essential in integration into clinical practice. The proposed method has a vast clinical application that could potentially be used for mass screening in resource-constraint countries.
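The abstract notes that Gaussian noise was added during training so the model copes with inferior-quality images. As a minimal stdlib-only sketch of that augmentation idea (not the authors' actual pipeline, which operates on real, normalised MRI slices; the function name and 2×3 toy "image" here are illustrative assumptions):

```python
import random

def add_gaussian_noise(image, sigma=0.05, seed=None):
    """Return a copy of a grayscale image (a 2-D list of floats in [0, 1])
    with zero-mean Gaussian noise added and values clipped back to [0, 1]."""
    rng = random.Random(seed)  # seeded for reproducible augmentation
    return [
        [min(1.0, max(0.0, px + rng.gauss(0.0, sigma))) for px in row]
        for row in image
    ]

# A tiny 2x3 toy "image"; a real MRI slice would be loaded and normalised first.
clean = [[0.0, 0.5, 1.0], [0.2, 0.8, 0.4]]
noisy = add_gaussian_noise(clean, sigma=0.1, seed=42)
```

Training on such perturbed copies alongside the originals is a standard way to make a CNN less sensitive to acquisition noise.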

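The abstract's claim that Shapley values "examine all possible combinations of inputs" can be made concrete with a brute-force sketch for a toy three-feature model (real SHAP libraries approximate this sum, since exact enumeration is exponential in the number of features; the function and toy model below are illustrative assumptions, not the paper's code):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f over len(x) features.

    Features outside a coalition are set to their baseline value; features
    inside take their value from x. Enumerates all coalitions, so this is
    only feasible for small n.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Classic Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Toy linear "model": for linear f, phi[i] = w[i] * (x[i] - baseline[i]).
f = lambda v: 2.0 * v[0] + 1.0 * v[1] - 0.5 * v[2]
phi = shapley_values(f, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
```

The "local accuracy" property the abstract mentions is visible here: the attributions plus the baseline prediction reconstruct `f(x)` exactly, which is what makes SHAP explanations internally consistent.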

Figures (g001–g007):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fc9/8964286/a89b5c685a17/fgene-13-822666-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fc9/8964286/9b016ca479fb/fgene-13-822666-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fc9/8964286/523e2b93e352/fgene-13-822666-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fc9/8964286/50439accd416/fgene-13-822666-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fc9/8964286/43d20bfa02ee/fgene-13-822666-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fc9/8964286/b5106a945586/fgene-13-822666-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fc9/8964286/b6355c9a168f/fgene-13-822666-g007.jpg

Similar Articles

1. Explanation-Driven Deep Learning Model for Prediction of Brain Tumour Status Using MRI Image Data.
   Front Genet. 2022 Mar 14;13:822666. doi: 10.3389/fgene.2022.822666. eCollection 2022.
2. Interpretable AI for bio-medical applications.
   Complex Eng Syst. 2022 Dec;2(4). doi: 10.20517/ces.2022.41. Epub 2022 Dec 28.
3. Explainable Machine Learning Framework for Image Classification Problems: Case Study on Glioma Cancer Prediction.
   J Imaging. 2020 May 28;6(6):37. doi: 10.3390/jimaging6060037.
4. Unboxing Deep Learning Model of Food Delivery Service Reviews Using Explainable Artificial Intelligence (XAI) Technique.
   Foods. 2022 Jul 8;11(14):2019. doi: 10.3390/foods11142019.
5. Spectral Zones-Based SHAP/LIME: Enhancing Interpretability in Spectral Deep Learning Models Through Grouped Feature Analysis.
   Anal Chem. 2024 Oct 1;96(39):15588-15597. doi: 10.1021/acs.analchem.4c02329. Epub 2024 Sep 17.
6. Explainable deep learning model for automatic mulberry leaf disease classification.
   Front Plant Sci. 2023 Sep 19;14:1175515. doi: 10.3389/fpls.2023.1175515. eCollection 2023.
7. Interpreting artificial intelligence models: a systematic review on the application of LIME and SHAP in Alzheimer's disease detection.
   Brain Inform. 2024 Apr 5;11(1):10. doi: 10.1186/s40708-024-00222-1.
8. Benchmarking the influence of pre-training on explanation performance in MR image classification.
   Front Artif Intell. 2024 Feb 26;7:1330919. doi: 10.3389/frai.2024.1330919. eCollection 2024.
9. Predicting motor outcome in preterm infants from very early brain diffusion MRI using a deep learning convolutional neural network (CNN) model.
   Neuroimage. 2020 Jul 15;215:116807. doi: 10.1016/j.neuroimage.2020.116807. Epub 2020 Apr 9.
10. Evaluating Retinal Disease Diagnosis with an Interpretable Lightweight CNN Model Resistant to Adversarial Attacks.
   J Imaging. 2023 Oct 11;9(10):219. doi: 10.3390/jimaging9100219.

Cited By

1. Constructing multicancer risk cohorts using national data from medical helplines and secondary care.
   NPJ Digit Med. 2025 Aug 27;8(1):551. doi: 10.1038/s41746-025-01855-0.
2. Explainable CNN for brain tumor detection and classification through XAI based key features identification.
   Brain Inform. 2025 Apr 30;12(1):10. doi: 10.1186/s40708-025-00257-y.
3. The clinical implications and interpretability of computational medical imaging (radiomics) in brain tumors.
   Insights Imaging. 2025 Mar 30;16(1):77. doi: 10.1186/s13244-025-01950-6.
4. Explainable Artificial Intelligence in Neuroimaging of Alzheimer's Disease.
   Diagnostics (Basel). 2025 Mar 4;15(5):612. doi: 10.3390/diagnostics15050612.
5. Explainable AI in Diagnostic Radiology for Neurological Disorders: A Systematic Review, and What Doctors Think About It.
   Diagnostics (Basel). 2025 Jan 13;15(2):168. doi: 10.3390/diagnostics15020168.
6. Explainable artificial intelligence with UNet based segmentation and Bayesian machine learning for classification of brain tumors using MRI images.
   Sci Rep. 2025 Jan 3;15(1):690. doi: 10.1038/s41598-024-84692-7.
7. A literature review of artificial intelligence (AI) for medical image segmentation: from AI and explainable AI to trustworthy AI.
   Quant Imaging Med Surg. 2024 Dec 5;14(12):9620-9652. doi: 10.21037/qims-24-723. Epub 2024 Nov 29.
8. Utilizing customized CNN for brain tumor prediction with explainable AI.
   Heliyon. 2024 Oct 9;10(20):e38997. doi: 10.1016/j.heliyon.2024.e38997. eCollection 2024 Oct 30.
9. Enhancing Deep Learning Model Explainability in Brain Tumor Datasets Using Post-Heuristic Approaches.
   J Imaging. 2024 Sep 18;10(9):232. doi: 10.3390/jimaging10090232.
10. IMPA-Net: Interpretable Multi-Part Attention Network for Trustworthy Brain Tumor Classification from MRI.
   Diagnostics (Basel). 2024 May 11;14(10):997. doi: 10.3390/diagnostics14100997.

References

1. Brain tumor segmentation based on deep learning and an attention mechanism using MRI multi-modalities brain images.
   Sci Rep. 2021 May 25;11(1):10930. doi: 10.1038/s41598-021-90428-8.
2. Artificial intelligence and machine learning for medical imaging: A technology review.
   Phys Med. 2021 Mar;83:242-256. doi: 10.1016/j.ejmp.2021.04.016. Epub 2021 May 9.
3. MADGAN: unsupervised medical anomaly detection GAN using multiple adjacent brain MRI slice reconstruction.
   BMC Bioinformatics. 2021 Apr 26;22(Suppl 2):31. doi: 10.1186/s12859-020-03936-1.
4. AI applications to medical images: From machine learning to deep learning.
   Phys Med. 2021 Mar;83:9-24. doi: 10.1016/j.ejmp.2021.02.006. Epub 2021 Mar 1.
5. Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries.
   CA Cancer J Clin. 2021 May;71(3):209-249. doi: 10.3322/caac.21660. Epub 2021 Feb 4.
6. Big data in psychiatry: multiomics, neuroimaging, computational modeling, and digital phenotyping.
   Neuropsychopharmacology. 2021 Jan;46(1):1-2. doi: 10.1038/s41386-020-00862-x. Epub 2020 Sep 12.