College of Electronics and Information Engineering, Sichuan University, Chengdu, 610065, China; School of Computer Science and Engineering, Nanyang Technological University, Singapore, 639798, Singapore.
College of Electronics and Information Engineering, Sichuan University, Chengdu, 610065, China.
Comput Biol Med. 2023 Sep;164:107328. doi: 10.1016/j.compbiomed.2023.107328. Epub 2023 Aug 7.
In recent years, deep learning models have been applied to neuroimaging data for early diagnosis of Alzheimer's disease (AD). Structural magnetic resonance imaging (sMRI) and positron emission tomography (PET) images provide structural and functional information about the brain, respectively. Combining these features yields better predictive models for AD diagnosis than using either modality alone. However, current deep-learning-based multi-modal approaches using sMRI and PET are mostly limited to convolutional neural networks, which do not readily integrate imaging features with subjects' phenotypic information. We propose to use graph neural networks (GNNs), which are designed for problems in non-Euclidean domains. In this study, we demonstrate how brain networks can be constructed from sMRI or PET images and used in a population-graph framework that combines phenotypic information with the imaging features of the brain networks. We then present a multi-modal GNN framework in which each modality has its own GNN branch, together with a technique that fuses the multi-modal data at the level of both node vectors and adjacency matrices. Finally, we perform late fusion to combine the preliminary decisions made in each branch into a final prediction. As multi-modal data become increasingly available, multi-source, multi-modal approaches represent the trend in AD diagnosis. We conducted exploratory experiments based on multi-modal imaging data combined with non-imaging phenotypic information for AD diagnosis, and analyzed the impact of phenotypic information on diagnostic performance. Experimental results demonstrate that our proposed multi-modal approach improves performance for AD diagnosis. Our study also provides a technical reference and supports the need for multivariate, multi-modal diagnostic methods.
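The abstract describes the architecture only at a high level. The following is a minimal PyTorch sketch of how such a population-graph, multi-branch GNN with fusion at the node-vector, adjacency-matrix, and decision levels might be wired up. All class names, layer sizes, and the specific fusion rules (feature concatenation, adjacency averaging, logit averaging) are illustrative assumptions, not the authors' published implementation.

    # Minimal sketch: two modality-specific GCN branches plus a fused branch,
    # operating on a population graph (nodes = subjects, node features =
    # imaging features, edges = phenotypic similarity).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def normalize_adj(a):
        # Symmetric normalization D^(-1/2) (A + I) D^(-1/2), as in vanilla GCNs.
        a = a + torch.eye(a.size(0), device=a.device, dtype=a.dtype)
        d = a.sum(dim=1).pow(-0.5)
        return d.unsqueeze(1) * a * d.unsqueeze(0)

    class GCNBranch(nn.Module):
        # Two-layer dense GCN producing per-subject class logits.
        def __init__(self, in_dim, hid_dim, n_classes):
            super().__init__()
            self.w1 = nn.Linear(in_dim, hid_dim)
            self.w2 = nn.Linear(hid_dim, n_classes)

        def forward(self, x, a):
            h = F.relu(a @ self.w1(x))  # one step of neighborhood aggregation
            return a @ self.w2(h)       # preliminary decision of this branch

    class MultiModalGNN(nn.Module):
        def __init__(self, dim_smri, dim_pet, hid_dim=64, n_classes=2):
            super().__init__()
            self.branch_smri = GCNBranch(dim_smri, hid_dim, n_classes)
            self.branch_pet = GCNBranch(dim_pet, hid_dim, n_classes)
            # Node-vector-level fusion: this branch sees concatenated features.
            self.branch_fused = GCNBranch(dim_smri + dim_pet, hid_dim, n_classes)

        def forward(self, x_smri, x_pet, a_smri, a_pet):
            a_s = normalize_adj(a_smri)
            a_p = normalize_adj(a_pet)
            # Adjacency-matrix-level fusion: average the modality graphs.
            a_f = normalize_adj(0.5 * (a_smri + a_pet))
            logits = [
                self.branch_smri(x_smri, a_s),
                self.branch_pet(x_pet, a_p),
                self.branch_fused(torch.cat([x_smri, x_pet], dim=1), a_f),
            ]
            # Late fusion: average the branches' preliminary decisions.
            return torch.stack(logits).mean(dim=0)

    # Toy usage: 100 subjects, 90 imaging features per modality.
    n = 100
    model = MultiModalGNN(dim_smri=90, dim_pet=90)
    x_s, x_p = torch.randn(n, 90), torch.randn(n, 90)
    r1, r2 = torch.rand(n, n), torch.rand(n, n)
    a_s, a_p = (r1 + r1.T) / 2, (r2 + r2.T) / 2  # symmetric similarity graphs
    print(model(x_s, x_p, a_s, a_p).shape)       # torch.Size([100, 2])

A dense adjacency representation is used here for readability; for larger cohorts a sparse formulation (e.g., edge lists as in common GNN libraries) would be the practical choice.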