Department of Radiology and the Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, North Carolina.
Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea.
Hum Brain Mapp. 2019 Feb 15;40(3):1001-1016. doi: 10.1002/hbm.24428. Epub 2018 Nov 1.
In this article, we aim to make maximal use of multimodality neuroimaging and genetic data to identify Alzheimer's disease (AD) and its prodromal stage, mild cognitive impairment (MCI), from normal aging subjects. Multimodality neuroimaging data such as MRI and PET provide valuable insights into brain abnormalities, while genetic data such as single nucleotide polymorphisms (SNPs) provide information about a patient's AD risk factors. Used together, these data may improve the accuracy of AD diagnosis. However, they are heterogeneous (e.g., with different data distributions) and have different numbers of samples (e.g., far fewer PET samples than MRI or SNP samples), so learning an effective model from them is challenging. To this end, we present a novel three-stage deep feature learning and fusion framework, in which a deep neural network is trained stage-wise. Each stage of the network learns feature representations for a different combination of modalities, trained effectively on the maximum number of available samples. Specifically, in the first stage, we learn latent representations (i.e., high-level features) for each modality independently, so that the heterogeneity among modalities is partially addressed and the high-level features from different modalities can be combined in the next stage. In the second stage, we learn joint latent features for each pairwise combination of modalities from the high-level features learned in the first stage. In the third stage, we learn the diagnostic labels by fusing the joint latent features learned in the second stage. To further increase the number of training samples, we also use data from multiple scanning time points for each training subject. We evaluate the proposed framework on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset for AD diagnosis, and the experimental results show that it outperforms other state-of-the-art methods.
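The abstract describes the three-stage network only at a high level. The following minimal PyTorch sketch illustrates one way such a stage-wise architecture could be organized; all module names, layer sizes, the three-class output, and the choice of PyTorch are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Stage 1: modality-specific encoder mapping raw features to a latent space,
    # so heterogeneous modalities become comparable high-level features.
    def __init__(self, in_dim, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class PairFusion(nn.Module):
    # Stage 2: joint latent features for one pair of modalities,
    # learned from the concatenated stage-1 features.
    def __init__(self, latent_dim=64, joint_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * latent_dim, joint_dim), nn.ReLU())

    def forward(self, a, b):
        return self.net(torch.cat([a, b], dim=1))

class StageWiseNet(nn.Module):
    # Stage 3: fuse all pairwise joint features and predict the diagnostic label
    # (here assumed to be three classes: NC / MCI / AD).
    def __init__(self, dims, latent_dim=64, joint_dim=32, n_classes=3):
        super().__init__()
        self.enc = nn.ModuleDict({m: Encoder(d, latent_dim) for m, d in dims.items()})
        self.pairs = [("mri", "pet"), ("mri", "snp"), ("pet", "snp")]
        self.fuse = nn.ModuleDict(
            {f"{a}_{b}": PairFusion(latent_dim, joint_dim) for a, b in self.pairs}
        )
        self.cls = nn.Linear(len(self.pairs) * joint_dim, n_classes)

    def forward(self, inputs):
        z = {m: self.enc[m](x) for m, x in inputs.items()}
        joint = [self.fuse[f"{a}_{b}"](z[a], z[b]) for a, b in self.pairs]
        return self.cls(torch.cat(joint, dim=1))

A brief usage example with random tensors (the feature dimensions are arbitrary placeholders):

net = StageWiseNet({"mri": 90, "pet": 90, "snp": 2000})
batch = {"mri": torch.randn(8, 90), "pet": torch.randn(8, 90), "snp": torch.randn(8, 2000)}
logits = net(batch)  # shape (8, 3): class scores for each of 8 subjects

Under the stage-wise training regime the abstract describes, each modality encoder would first be trained on all subjects having that modality, each pairwise fusion module on all subjects having both modalities of its pair, and the final classifier last, so that subjects with incomplete data (e.g., missing PET) still contribute to the earlier stages.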