Predicting long-term progression of Alzheimer's disease using a multimodal deep learning model incorporating interaction effects.

Author Information

Wang Yifan, Gao Ruitian, Wei Ting, Johnston Luke, Yuan Xin, Zhang Yue, Yu Zhangsheng

Affiliations

Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, 800 Dongchuan Road, Minhang District, Shanghai, 200240, China.

SJTU-Yale Joint Center for Biostatistics and Data Science, Shanghai Jiao Tong University, Shanghai, China.

Publication Information

J Transl Med. 2024 Mar 11;22(1):265. doi: 10.1186/s12967-024-05025-w.

Abstract

BACKGROUND

Identifying individuals with mild cognitive impairment (MCI) at risk of progressing to Alzheimer's disease (AD) provides a unique opportunity for early interventions. Therefore, accurate and long-term prediction of the conversion from MCI to AD is desired but, to date, remains challenging. Here, we developed an interpretable deep learning model featuring a novel design that incorporates interaction effects and multimodality to improve the prediction accuracy and horizon for MCI-to-AD progression.
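The abstract does not describe the fusion architecture in detail. Purely as an illustration of the general idea of combining modality-specific representations with explicit cross-modal interaction terms, a minimal PyTorch-style sketch follows; the encoder sizes, the element-wise-product interaction design, and all names are assumptions for illustration, not the authors' published model.

```python
# Minimal sketch of multimodal fusion with explicit interaction terms.
# NOT the authors' architecture; all dimensions and names are illustrative.
import torch
import torch.nn as nn


class MultimodalInteractionNet(nn.Module):
    def __init__(self, dim_mri=256, dim_clin=16, dim_snp=32, dim_latent=64):
        super().__init__()
        # Modality-specific encoders map each input to a shared latent size
        self.enc_mri = nn.Sequential(nn.Linear(dim_mri, dim_latent), nn.ReLU())
        self.enc_clin = nn.Sequential(nn.Linear(dim_clin, dim_latent), nn.ReLU())
        self.enc_snp = nn.Sequential(nn.Linear(dim_snp, dim_latent), nn.ReLU())
        # Classifier sees main effects (3 latents) plus pairwise interactions (3 latents)
        self.classifier = nn.Linear(6 * dim_latent, 1)

    def forward(self, x_mri, x_clin, x_snp):
        h_m, h_c, h_s = self.enc_mri(x_mri), self.enc_clin(x_clin), self.enc_snp(x_snp)
        # Pairwise element-wise products act as explicit cross-modal interaction terms
        interactions = [h_m * h_c, h_m * h_s, h_c * h_s]
        fused = torch.cat([h_m, h_c, h_s] + interactions, dim=-1)
        return self.classifier(fused)  # logit for MCI-to-AD conversion
```

In a design of this kind, the interaction terms let the classifier weight joint effects (for example, an imaging feature whose prognostic value depends on genotype) that a plain concatenation of main effects cannot express directly.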

METHODS

This multi-center, multi-cohort retrospective study collected structural magnetic resonance imaging (sMRI), clinical assessments, and genetic polymorphism data of 252 patients with MCI at baseline from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our deep learning model was cross-validated on the ADNI-1 and ADNI-2/GO cohorts and further generalized in the ongoing ADNI-3 cohort. We evaluated the model performance using the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, and F1 score.
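For reference, the evaluation metrics named above can be computed as in the following sketch, which uses scikit-learn on placeholder labels and scores; the 0.5 decision threshold and the toy data are assumptions, not values from the study.

```python
# Sketch of the metrics reported in the abstract: AUC, accuracy,
# sensitivity, specificity, and F1 score. Labels/scores are placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score, f1_score, confusion_matrix

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])                    # 1 = converted to AD
y_score = np.array([0.1, 0.4, 0.8, 0.9, 0.2, 0.7, 0.3, 0.6])   # predicted probabilities
y_pred = (y_score >= 0.5).astype(int)                          # assumed 0.5 threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("AUC:        ", roc_auc_score(y_true, y_score))
print("Accuracy:   ", accuracy_score(y_true, y_pred))
print("Sensitivity:", tp / (tp + fn))
print("Specificity:", tn / (tn + fp))
print("F1 score:   ", f1_score(y_true, y_pred))
```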

RESULTS

On the cross-validation set, our model achieved superior results for predicting MCI conversion within 4 years (AUC, 0.962; accuracy, 92.92%; sensitivity, 88.89%; specificity, 95.33%) compared to all existing studies. In the independent test, our model exhibited consistent performance with an AUC of 0.939 and an accuracy of 92.86%. Integrating interaction effects and multimodal data into the model significantly increased prediction accuracy by 4.76% (P = 0.01) and 4.29% (P = 0.03), respectively. Furthermore, our model demonstrated robustness to inter-center and inter-scanner variability, while generating interpretable predictions by quantifying the contribution of multimodal biomarkers.
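The abstract does not state which statistical test produced these P values; one common way to compare such ablations is a paired test on fold-wise accuracies from cross-validation, sketched below with placeholder numbers rather than study data.

```python
# Hypothetical comparison of fold-wise accuracies for a model with and
# without interaction effects; the values and the choice of a paired
# t-test are assumptions, not taken from the paper.
from scipy.stats import ttest_rel

acc_full_model = [0.93, 0.94, 0.92, 0.93, 0.94]   # with interaction effects
acc_ablated    = [0.88, 0.90, 0.87, 0.89, 0.90]   # without interaction effects

t_stat, p_value = ttest_rel(acc_full_model, acc_ablated)
mean_gain = sum(a - b for a, b in zip(acc_full_model, acc_ablated)) / len(acc_full_model)
print(f"mean accuracy gain = {mean_gain:.4f}, p = {p_value:.4f}")
```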

CONCLUSIONS

The proposed deep learning model presents a novel perspective by combining interaction effects and multimodality, leading to more accurate and longer-term predictions of AD progression, which promises to improve pre-dementia patient care.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/167c/10926590/cc53d19e4d77/12967_2024_5025_Fig1_HTML.jpg
