Li Yuhan, Niu Donghao, Qi Keying, Liang Dong, Long Xiaojing
Research Centers for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.
The Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China.
Front Aging Neurosci. 2025 Mar 21;17:1532470. doi: 10.3389/fnagi.2025.1532470. eCollection 2025.
Conventional computer-aided diagnosis of Alzheimer's disease (AD) relies predominantly on magnetic resonance imaging (MRI) alone. Imaging genetics methods, which link genes to brain structure over the course of disease progression, can facilitate early prediction of AD. Although MRI-based deep learning methods have shown promising results for early AD diagnosis, limited dataset sizes have led most imaging genetics studies of AD to rely on statistical approaches. Existing deep learning approaches typically use pre-defined regions of interest and risk variants from known susceptibility genes, combined with relatively simple feature fusion schemes that fail to fully capture the relationship between images and genes. To address these limitations, we proposed a multi-modal deep learning classification network based on MRI and single nucleotide polymorphism (SNP) data for AD diagnosis and prediction of mild cognitive impairment (MCI) progression. Our model used a convolutional neural network (CNN) to extract whole-brain structural features, a Transformer network to capture genetic features, and a cross-transformer-based network for feature fusion. We further incorporated an attention-map-based interpretability method to identify the brain structures and risk variants associated with AD and to analyze their interrelationships. The proposed model was trained and evaluated on 1,541 subjects from the ADNI database. Experimental results showed that the model effectively integrates information from both modalities, improving the accuracy of AD diagnosis and MCI progression prediction.
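To make the described architecture concrete, the following is a minimal sketch of the CNN-plus-Transformer design with cross-transformer (cross-attention) fusion, assuming a PyTorch implementation. The module names, SNP count, embedding size, and layer counts are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch (not the authors' code): a 3D-CNN branch for MRI,
# a Transformer encoder branch for SNP genotypes, and a bidirectional
# cross-attention fusion block. Dimensions and depths are illustrative.
import torch
import torch.nn as nn


class MRIEncoder(nn.Module):
    """Small 3D CNN mapping an MRI volume to a sequence of patch features."""

    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(64, embed_dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):                      # x: (B, 1, D, H, W)
        f = self.features(x)                   # (B, C, d, h, w)
        return f.flatten(2).transpose(1, 2)    # (B, d*h*w, C) token sequence


class SNPEncoder(nn.Module):
    """Transformer encoder over SNP genotypes treated as a token sequence."""

    def __init__(self, num_snps=1000, embed_dim=128, depth=2, heads=4):
        super().__init__()
        # Genotypes coded 0/1/2 (minor-allele counts) are embedded per SNP.
        self.genotype_embed = nn.Embedding(3, embed_dim)
        self.pos_embed = nn.Parameter(torch.zeros(1, num_snps, embed_dim))
        layer = nn.TransformerEncoderLayer(embed_dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)

    def forward(self, g):                      # g: (B, num_snps) integer genotypes
        return self.encoder(self.genotype_embed(g) + self.pos_embed)


class CrossTransformerFusion(nn.Module):
    """Bidirectional cross-attention between image tokens and SNP tokens."""

    def __init__(self, embed_dim=128, heads=4):
        super().__init__()
        self.img_to_snp = nn.MultiheadAttention(embed_dim, heads, batch_first=True)
        self.snp_to_img = nn.MultiheadAttention(embed_dim, heads, batch_first=True)

    def forward(self, img_tokens, snp_tokens):
        # Image tokens attend to SNP tokens and vice versa; the attention
        # weights can be retained for attention-map-based interpretability.
        img_fused, img_attn = self.img_to_snp(img_tokens, snp_tokens, snp_tokens)
        snp_fused, snp_attn = self.snp_to_img(snp_tokens, img_tokens, img_tokens)
        fused = torch.cat([img_fused.mean(1), snp_fused.mean(1)], dim=-1)
        return fused, (img_attn, snp_attn)


class MultiModalADClassifier(nn.Module):
    """MRI + SNP classifier for AD diagnosis / MCI progression prediction."""

    def __init__(self, num_snps=1000, embed_dim=128, num_classes=2):
        super().__init__()
        self.mri = MRIEncoder(embed_dim)
        self.snp = SNPEncoder(num_snps, embed_dim)
        self.fusion = CrossTransformerFusion(embed_dim)
        self.head = nn.Linear(2 * embed_dim, num_classes)

    def forward(self, mri, genotypes):
        fused, attn_maps = self.fusion(self.mri(mri), self.snp(genotypes))
        return self.head(fused), attn_maps


if __name__ == "__main__":
    model = MultiModalADClassifier()
    logits, _ = model(torch.randn(2, 1, 32, 32, 32), torch.randint(0, 3, (2, 1000)))
    print(logits.shape)  # torch.Size([2, 2])
```

In a design of this kind, the cross-attention weights returned by the fusion block can be rendered as attention maps linking image patches (brain regions) to individual SNPs, which is the sort of interpretability analysis the abstract describes.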