Chen Zhenpeng, Qi Beier, Jing Bin, Dong Ruijuan, Chen Rong, Feng Pujie, Shou Yilu, Li Haiyun
School of Biomedical Engineering, Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing, China.
Capital Medical University, Key Laboratory of Otolaryngology Head and Neck Surgery, Beijing Tongren Hospital, Beijing, China.
J Alzheimers Dis. 2024 Nov 22:13872877241295287. doi: 10.1177/13872877241295287.
Accurately differentiating stable mild cognitive impairment (sMCI) from progressive MCI (pMCI) is clinically relevant, and identification of pMCI is crucial for timely treatment before it evolves into Alzheimer's disease (AD).
To construct a convolutional neural network (CNN) model that differentiates pMCI from sMCI by integrating features from structural magnetic resonance imaging (sMRI) and positron emission tomography (PET) images.
We proposed a multi-modal and multi-stage region of interest (ROI)-based fusion network (m2ROI-FN) CNN model to differentiate pMCI from sMCI, adopting a multi-stage fusion strategy to integrate deep semantic features and multiple morphological metrics derived from ROIs of sMRI and PET images. Specifically, ten AD-related ROIs from each modality were selected as patches and input into 3D hierarchical CNNs. The deep semantic features extracted by the CNNs were fused through a multi-modal integration module and further combined with morphological metrics extracted by FreeSurfer. Finally, a multilayer perceptron classifier was used for subject-level MCI classification.
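The multi-stage fusion described above can be illustrated with a minimal NumPy sketch. All dimensions, weights, and the linear stand-in for the 3D hierarchical CNNs are illustrative assumptions, not the authors' implementation; only the ten-ROI-per-modality structure, the fuse-then-concatenate-morphometrics flow, and the two-class (pMCI vs. sMCI) head come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ROI = 10     # ten AD-related ROIs per modality (from the abstract)
D_PATCH = 128  # assumed flattened ROI-patch size
D_SEM = 64     # assumed deep-semantic feature size per ROI
D_MORPH = 30   # assumed number of FreeSurfer morphological metrics

def relu(x):
    return np.maximum(x, 0.0)

def extract_semantic(patches, w):
    # Stand-in for the 3D hierarchical CNNs: maps each ROI patch to a
    # feature vector. The real model uses per-ROI 3D convolutions;
    # here a single linear map + ReLU keeps the sketch self-contained.
    return relu(patches @ w)  # (N_ROI, D_SEM)

def m2roi_fn_forward(smri_patches, pet_patches, morph, params):
    # Stage 1: per-modality deep semantic features from ROI patches.
    f_smri = extract_semantic(smri_patches, params["w_smri"])
    f_pet = extract_semantic(pet_patches, params["w_pet"])
    # Stage 2: multi-modal integration (assumed here: concatenate the
    # modalities per ROI, then average-pool over the ten ROIs).
    fused = np.concatenate([f_smri, f_pet], axis=1).mean(axis=0)
    # Stage 3: combine with the FreeSurfer morphological metrics.
    feats = np.concatenate([fused, morph])
    # MLP classifier head: one hidden layer, 2-way softmax (sMCI, pMCI).
    h = relu(feats @ params["w1"] + params["b1"])
    logits = h @ params["w2"] + params["b2"]
    p = np.exp(logits - logits.max())
    return p / p.sum()

params = {
    "w_smri": rng.standard_normal((D_PATCH, D_SEM)) * 0.1,
    "w_pet": rng.standard_normal((D_PATCH, D_SEM)) * 0.1,
    "w1": rng.standard_normal((2 * D_SEM + D_MORPH, 32)) * 0.1,
    "b1": np.zeros(32),
    "w2": rng.standard_normal((32, 2)) * 0.1,
    "b2": np.zeros(2),
}

probs = m2roi_fn_forward(
    rng.standard_normal((N_ROI, D_PATCH)),  # synthetic sMRI ROI patches
    rng.standard_normal((N_ROI, D_PATCH)),  # synthetic PET ROI patches
    rng.standard_normal(D_MORPH),           # synthetic morphometrics
    params,
)
print(probs)  # two class probabilities summing to 1
```

The key design point the abstract emphasizes is that fusion happens in stages: modalities are merged at the deep-feature level first, and the hand-crafted morphometrics are appended only afterwards, so the classifier sees both learned and engineered features.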
The proposed model achieved an accuracy of 77.4% in differentiating pMCI from sMCI under 5-fold cross-validation on the entire ADNI database. Further, ADNI-1&2 were combined into an independent sample for model training and validation, and ADNI-3&GO formed a separate independent sample for multi-center testing. The model achieved 73.2% accuracy in distinguishing pMCI from sMCI on ADNI-1&2 and 75.0% on ADNI-3&GO.
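The 5-fold cross-validation protocol used for the first result can be sketched as follows. Subject counts, labels, and the majority-vote placeholder "model" are synthetic; only the fold structure and the averaging of per-fold accuracy reflect the evaluation described above.

```python
import numpy as np

rng = np.random.default_rng(42)
n_subjects = 100                           # synthetic cohort size
labels = rng.integers(0, 2, n_subjects)    # 0 = sMCI, 1 = pMCI (synthetic)

def five_fold_indices(n, k=5, seed=0):
    # Shuffle subject indices once, then cut into k near-equal folds
    # so every subject appears in exactly one test fold.
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

folds = five_fold_indices(n_subjects)
accs = []
for i, test_idx in enumerate(folds):
    train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
    # Placeholder classifier: predict the majority training label.
    # A real run would train the m2ROI-FN model on train_idx here.
    majority = int(labels[train_idx].mean() >= 0.5)
    accs.append(float((labels[test_idx] == majority).mean()))

print(f"mean 5-fold accuracy: {np.mean(accs):.3f}")
```

The multi-center experiment differs from this loop in one important way: ADNI-1&2 and ADNI-3&GO are disjoint by cohort rather than by random fold, which tests whether the model transfers across acquisition sites and protocols.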
An effective m2ROI-FN model for distinguishing pMCI from sMCI was proposed, capable of capturing distinctive features within the ROIs of sMRI and PET images. The experimental results demonstrate the model's potential to differentiate pMCI from sMCI.