Muhammad Ahmad, Jin Qi, Elwasila Osman, Gulzar Yonis
School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China.
Department of Management Information Systems, College of Business Administration, King Faisal University, Al-Ahsa 31982, Saudi Arabia.
Brain Sci. 2025 Jun 6;15(6):612. doi: 10.3390/brainsci15060612.
BACKGROUND/OBJECTIVES: Alzheimer's disease (AD), a progressive neurodegenerative disorder, demands precise early diagnosis to enable timely intervention. Traditional convolutional neural networks (CNNs) and other deep learning models often fail to integrate localized brain changes with global connectivity patterns effectively, limiting their efficacy in AD classification.
METHODS: This research proposes a deep learning framework for multi-stage AD classification from T1-weighted MRI scans. Its pivotal advancement is an adaptive feature fusion layer that integrates features extracted by a ResNet50-based CNN and a vision transformer (ViT). Unlike static fusion methods, this layer employs an attention mechanism to dynamically weight ResNet50's localized structural features against the ViT's global connectivity patterns, enhancing stage-specific classification accuracy.
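The abstract does not give implementation details of the adaptive feature fusion layer. The sketch below is a minimal, hypothetical PyTorch illustration of one way an attention mechanism can learn per-sample weights for a ResNet50 feature vector and a ViT feature vector; all module names, dimensions, and design choices here are assumptions for illustration, not the authors' code.

```python
# Minimal sketch of an attention-based adaptive feature fusion layer.
# All names and dimensions are illustrative assumptions, not the published implementation.
import torch
import torch.nn as nn


class AdaptiveFeatureFusion(nn.Module):
    """Fuses a CNN feature vector and a ViT feature vector with learned attention weights."""

    def __init__(self, cnn_dim: int = 2048, vit_dim: int = 768, fused_dim: int = 512):
        super().__init__()
        # Project both branches into a common embedding space.
        self.cnn_proj = nn.Linear(cnn_dim, fused_dim)
        self.vit_proj = nn.Linear(vit_dim, fused_dim)
        # Small attention head that scores each branch from the concatenated projections.
        self.attention = nn.Sequential(
            nn.Linear(2 * fused_dim, fused_dim),
            nn.Tanh(),
            nn.Linear(fused_dim, 2),
        )

    def forward(self, cnn_feat: torch.Tensor, vit_feat: torch.Tensor) -> torch.Tensor:
        c = self.cnn_proj(cnn_feat)  # (batch, fused_dim) localized structural features
        v = self.vit_proj(vit_feat)  # (batch, fused_dim) global context features
        scores = self.attention(torch.cat([c, v], dim=-1))  # (batch, 2)
        weights = torch.softmax(scores, dim=-1)              # per-sample branch weights
        # Weighted sum: the network learns, per scan, how much to trust each branch.
        return weights[:, 0:1] * c + weights[:, 1:2] * v


if __name__ == "__main__":
    fusion = AdaptiveFeatureFusion()
    cnn_features = torch.randn(4, 2048)  # e.g., pooled ResNet50 output
    vit_features = torch.randn(4, 768)   # e.g., ViT [CLS] token embedding
    print(fusion(cnn_features, vit_features).shape)  # torch.Size([4, 512])
```

The softmax gating shown here is only one plausible realization of "dynamic integration"; the published layer may use a different attention formulation.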
RESULTS: Evaluated on the Alzheimer's 5-Class (AD5C) dataset comprising 2380 MRI scans, the framework achieves an accuracy of 99.42% (precision: 99.55%; recall: 99.46%; F1-score: 99.50%), surpassing the prior benchmark of 98.24% by 1.18 percentage points. Ablation studies underscore the essential role of adaptive feature fusion in minimizing misclassifications, while external validation on a four-class dataset confirms robust generalizability.
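For reference, the snippet below shows how the reported multi-class metrics are conventionally computed with scikit-learn. The averaging scheme (macro) and the toy labels are assumptions for illustration; the abstract does not specify how the authors aggregated per-class scores.

```python
# Illustrative computation of accuracy, precision, recall, and F1 for a 5-class problem;
# labels are toy data and the macro averaging is an assumption, not stated in the abstract.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [0, 1, 2, 3, 4, 2, 1, 0]  # ground-truth AD stage labels
y_pred = [0, 1, 2, 3, 4, 2, 1, 1]  # model predictions

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print(f"accuracy={accuracy:.4f} precision={precision:.4f} "
      f"recall={recall:.4f} f1={f1:.4f}")
```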
CONCLUSIONS: By integrating multi-scale neuroimaging features, this framework enables precise early AD diagnosis, empowering clinicians to optimize patient care through timely, targeted interventions.