
Hybrid of VGG-16 and FTVT-b16 Models to Enhance Brain Tumors Classification Using MRI Images.

Author Information

Younis Eman M, Ibrahim Ibrahim A, Mahmoud Mahmoud N, Albarrak Abdullah M

Affiliations

Faculty of Computers and Information, Minia University, Minia 61519, Egypt.

College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 13318, Saudi Arabia.

Publication Information

Diagnostics (Basel). 2025 Aug 12;15(16):2014. doi: 10.3390/diagnostics15162014.

Abstract

Background: The accurate classification of brain tumors from magnetic resonance imaging (MRI) scans is pivotal for timely clinical intervention, yet it remains challenged by tumor heterogeneity, morphological variability, and imaging artifacts. Methods: This paper presents a novel hybrid deep learning framework that combines the hierarchical feature extraction capabilities of VGG-16, a convolutional neural network (CNN), with the global contextual modeling of FTVT-b16, a fine-tuned vision transformer (ViT), to advance the precision of brain tumor classification. To evaluate the proposed method's efficacy, two widely known MRI datasets were used in the experiments. The first dataset consisted of 7023 MRI scans categorized into four classes: gliomas, meningiomas, pituitary tumors, and no tumor. The second dataset, obtained from Kaggle, consisted of 3000 scans categorized into two classes: healthy brains and brain tumors. Results: The proposed framework was run on these two datasets and demonstrated outstanding performance, with accuracies of 99.46% and 99.90%, respectively. It addresses critical limitations of conventional CNNs (local receptive fields) and pure ViTs (data inefficiency), offering a robust, interpretable solution aligned with clinical workflows. These findings underscore the transformative potential of hybrid architectures in neuro-oncology, paving the way for AI-assisted precision diagnostics. Conclusions: Future work will focus on multi-institutional validation and computational optimization to ensure scalability in diverse clinical settings.
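The abstract does not specify how the two backbones are fused, so the following is only a minimal sketch of one plausible hybrid, assuming feature-level fusion: pooled features from a pretrained VGG-16 and a ViT-B/16 (standing in for FTVT-b16) are concatenated and passed to a small classification head for the four-class dataset. The torchvision/timm model names, the fusion-by-concatenation choice, and the head dimensions are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a hybrid CNN + ViT classifier for 4-class brain-tumor MRI.
# Assumption: feature-level fusion by concatenating VGG-16 and ViT-B/16
# global features before a linear head (fusion scheme not stated in the abstract).
import torch
import torch.nn as nn
from torchvision import models
import timm


class HybridVGGViT(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # VGG-16 convolutional backbone (local, hierarchical features).
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.cnn = vgg.features                  # -> (B, 512, 7, 7) for 224x224 input
        self.cnn_pool = nn.AdaptiveAvgPool2d(1)  # -> (B, 512, 1, 1)

        # ViT-B/16 backbone (global context); num_classes=0 returns pooled features.
        self.vit = timm.create_model(
            "vit_base_patch16_224", pretrained=True, num_classes=0
        )                                        # -> (B, 768)

        # Fusion head over the concatenated CNN + ViT features (dims are illustrative).
        self.head = nn.Sequential(
            nn.Linear(512 + 768, 256),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        cnn_feat = self.cnn_pool(self.cnn(x)).flatten(1)  # (B, 512)
        vit_feat = self.vit(x)                            # (B, 768)
        return self.head(torch.cat([cnn_feat, vit_feat], dim=1))


# Example: one forward pass on a batch of 224x224 RGB MRI slices.
model = HybridVGGViT(num_classes=4)
logits = model(torch.randn(2, 3, 224, 224))  # shape (2, 4)
```

Concatenation keeps both the local CNN features and the global transformer features available to the classifier; other fusion schemes (attention-weighted fusion, decision-level averaging) would fit the same skeleton.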


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/03a2/12385457/5d682151497f/diagnostics-15-02014-g001.jpg
