
Multimodal feature fusion-based graph convolutional networks for Alzheimer's disease stage classification using F-18 florbetaben brain PET images and clinical indicators.

Author Information

Lee Gyu-Bin, Jeong Young-Jin, Kang Do-Young, Yun Hyun-Jin, Yoon Min

Affiliations

Department of Nuclear Medicine, Dong-A University College of Medicine and Medical Center, Busan, Korea.

Department of Applied Mathematics, Pukyong National University, Busan, Korea.

Publication Information

PLoS One. 2024 Dec 23;19(12):e0315809. doi: 10.1371/journal.pone.0315809. eCollection 2024.

Abstract

Alzheimer's disease (AD), the most prevalent degenerative brain disease associated with dementia, requires early diagnosis to mitigate worsening of symptoms through appropriate management and treatment. Recent studies on AD stage classification increasingly use multimodal data. However, few studies have applied graph neural networks to multimodal data comprising F-18 florbetaben (FBB) amyloid brain positron emission tomography (PET) images and clinical indicators. The objective of this study was to demonstrate the effectiveness of the graph convolutional network (GCN) for AD stage classification using multimodal data, specifically FBB PET images and clinical indicators, collected from Dong-A University Hospital (DAUH) and the Alzheimer's Disease Neuroimaging Initiative (ADNI). The effectiveness of the GCN was demonstrated through comparisons with the support vector machine, random forest, and multilayer perceptron across four classification tasks (normal control (NC) vs. AD, NC vs. mild cognitive impairment (MCI), MCI vs. AD, and NC vs. MCI vs. AD). As input, all models received the same combined feature vectors, created by concatenating the PET imaging feature vectors extracted by a 3D dense convolutional network with non-imaging feature vectors consisting of clinical indicators using a multimodal feature fusion method. An adjacency matrix for the population graph was constructed using cosine similarity or the Euclidean distance between subjects' PET imaging feature vectors and/or non-imaging feature vectors. The usage ratio of these different modal data and the edge-assignment threshold were tuned as hyperparameters. In this study, GCN-CS-com and GCN-ED-com were the GCN models that received the adjacency matrix constructed using cosine similarity (CS) and the Euclidean distance (ED), respectively, between the subjects' PET imaging feature vectors and non-imaging feature vectors. In modified nested cross-validation, GCN-CS-com and GCN-ED-com respectively achieved average test accuracies of 98.40%, 94.58%, 94.01%, 82.63% and 99.68%, 93.82%, 93.88%, 90.43% for the four aforementioned classification tasks using the DAUH dataset, outperforming the other models. Furthermore, GCN-CS-com and GCN-ED-com respectively achieved average test accuracies of 76.16% and 90.11% for NC vs. MCI vs. AD classification using the ADNI dataset, outperforming the other models. These results demonstrate that the GCN could be an effective model for AD stage classification using multimodal data.
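To make the described pipeline concrete, the following is a minimal sketch (not the authors' code) of the multimodal feature fusion and population-graph construction summarized above: per-subject PET embeddings are concatenated with clinical-indicator vectors, pairwise affinities are computed with cosine similarity or a Euclidean-distance-based kernel, and edges are assigned by thresholding. The feature dimensions, the threshold value, the Gaussian-kernel mapping from distance to affinity, and all function names are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of multimodal feature fusion and population-graph construction.
# Shapes, threshold, and the distance-to-affinity mapping are assumptions.
import numpy as np

def fuse_features(pet_feats: np.ndarray, clinical_feats: np.ndarray) -> np.ndarray:
    """Concatenate per-subject PET imaging features (e.g., from a 3D DenseNet
    encoder) with clinical-indicator features along the feature axis."""
    return np.concatenate([pet_feats, clinical_feats], axis=1)

def cosine_similarity_matrix(x: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between subjects (rows of x)."""
    norms = np.linalg.norm(x, axis=1, keepdims=True) + 1e-12
    x_unit = x / norms
    return x_unit @ x_unit.T

def euclidean_affinity_matrix(x: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Pairwise affinity derived from Euclidean distance via a Gaussian kernel
    (one common convention; the paper's exact mapping may differ)."""
    sq_dists = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def build_adjacency(affinity: np.ndarray, threshold: float) -> np.ndarray:
    """Assign an edge between two subjects when their affinity exceeds the
    threshold (tuned as a hyperparameter in the study)."""
    adj = (affinity >= threshold).astype(np.float32)
    np.fill_diagonal(adj, 1.0)  # keep self-loops for GCN propagation
    return adj

# Toy example with random stand-in features for 5 subjects
rng = np.random.default_rng(0)
pet = rng.normal(size=(5, 128))      # assumed PET embedding size
clinical = rng.normal(size=(5, 6))   # assumed number of clinical indicators
fused = fuse_features(pet, clinical)
adj_cs = build_adjacency(cosine_similarity_matrix(fused), threshold=0.5)
adj_ed = build_adjacency(euclidean_affinity_matrix(fused), threshold=0.5)
print(adj_cs.shape, adj_ed.shape)    # (5, 5) (5, 5)
```

The fused vectors would then serve as node features and the thresholded adjacency matrix as the graph input to a GCN, which typically propagates information with the standard rule H^(l+1) = σ(D̃^(-1/2) Ã D̃^(-1/2) H^(l) W^(l)); the split between imaging and non-imaging contributions to the affinity is what the abstract describes as a tuned usage ratio.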


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/25ba/11666044/da3a5160c32b/pone.0315809.g001.jpg
