NeuroNet19: an explainable deep neural network model for the classification of brain tumors using magnetic resonance imaging data.

Affiliations

Computer Science and Engineering, BGC Trust University Bangladesh, Chittagong, Bangladesh.

Computer Science and Engineering Discipline, Khulna University, Khulna, 9208, Bangladesh.

Publication Information

Sci Rep. 2024 Jan 17;14(1):1524. doi: 10.1038/s41598-024-51867-1.

Abstract

Brain tumors (BTs) are among the deadliest diseases and can significantly shorten a person's life. In recent years, deep learning has become increasingly popular for detecting and classifying BTs. In this paper, we propose a deep neural network architecture called NeuroNet19. It uses VGG19 as its backbone and incorporates a novel module named the Inverted Pyramid Pooling Module (iPPM). The iPPM captures multi-scale feature maps, ensuring the extraction of both local and global image context; this enhances the feature maps produced by the backbone regardless of the spatial position or size of the tumors. To ensure the model's transparency and accountability, we employ explainable AI. Specifically, we use Local Interpretable Model-Agnostic Explanations (LIME), which highlights the features or regions the model focuses on when predicting individual images. NeuroNet19 is trained on four classes of BTs: glioma, meningioma, no tumor, and pituitary tumors. It is tested on a public dataset containing 7023 images. Our research demonstrates that NeuroNet19 achieves the highest accuracy at 99.3%, with precision, recall, and F1 scores of 99.2% and a Cohen's kappa coefficient (CKC) of 99%.
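
The abstract describes the architecture only at a high level: a VGG19 backbone feeding the authors' Inverted Pyramid Pooling Module (iPPM), with LIME used for post-hoc explanations; the exact layer configuration is not given here. Below is a minimal, hypothetical PyTorch sketch under those assumptions. `MultiScalePoolingHead` is an illustrative stand-in for the iPPM (pooling the backbone feature map at several grid sizes and fusing the results back into it), and the LIME call shows how per-image explanations of the kind described in the abstract are typically obtained; none of this is the authors' published code.

```python
# Minimal sketch of a NeuroNet19-style classifier (VGG19 backbone + a
# multi-scale pooling head standing in for the paper's iPPM) and a LIME
# explanation of one prediction. Illustrative reconstruction only; the
# iPPM internals below are assumptions, not the authors' implementation.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models, transforms
from lime import lime_image  # pip install lime


class MultiScalePoolingHead(nn.Module):
    """Hypothetical stand-in for the Inverted Pyramid Pooling Module (iPPM):
    pool the backbone feature map at several grid sizes to capture local and
    global context, then fuse the pooled maps with the original features."""

    def __init__(self, in_channels: int, pool_sizes=(1, 2, 3, 6)):
        super().__init__()
        self.pools = nn.ModuleList([nn.AdaptiveAvgPool2d(s) for s in pool_sizes])
        # 1x1 convs shrink each pooled branch before fusion (assumed design).
        self.reduce = nn.ModuleList([
            nn.Conv2d(in_channels, in_channels // len(pool_sizes), kernel_size=1)
            for _ in pool_sizes
        ])

    def forward(self, x):
        h, w = x.shape[-2:]
        branches = [
            F.interpolate(r(p(x)), size=(h, w), mode="bilinear", align_corners=False)
            for p, r in zip(self.pools, self.reduce)
        ]
        return torch.cat([x, *branches], dim=1)  # original + multi-scale context


class NeuroNet19Sketch(nn.Module):
    """VGG19 features -> multi-scale pooling head -> 4-class classifier
    (glioma, meningioma, no tumor, pituitary), per the abstract."""

    def __init__(self, num_classes: int = 4):
        super().__init__()
        # weights=None keeps the sketch offline-friendly; swap in
        # models.VGG19_Weights.IMAGENET1K_V1 for a pretrained backbone.
        self.features = models.vgg19(weights=None).features  # 512 channels out
        self.context = MultiScalePoolingHead(512)             # 512 + 4*128 = 1024
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(1024, num_classes)
        )

    def forward(self, x):
        return self.classifier(self.context(self.features(x)))


# --- LIME explanation of a single (placeholder) MRI slice -------------------
model = NeuroNet19Sketch().eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])


def predict_fn(images: np.ndarray) -> np.ndarray:
    """LIME passes a batch of HxWx3 arrays; return class probabilities."""
    batch = torch.stack([preprocess(img.astype(np.uint8)) for img in images])
    with torch.no_grad():
        return torch.softmax(model(batch), dim=1).numpy()


mri_slice = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)  # stand-in image
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(mri_slice, predict_fn,
                                         top_labels=1, num_samples=200)
label = explanation.top_labels[0]
_, mask = explanation.get_image_and_mask(label, positive_only=True,
                                         num_features=5, hide_rest=False)
print("Predicted class:", label, "- superpixel mask sum:", int(mask.sum()))
```

The pool sizes, channel reduction, and classifier head above are design assumptions chosen only to make the sketch runnable; the published NeuroNet19 configuration should be taken from the paper itself.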

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a71b/10794704/e335ed2109fa/41598_2024_51867_Fig1_HTML.jpg
