IEEE Trans Med Imaging. 2019 Oct;38(10):2411-2422. doi: 10.1109/TMI.2019.2913158. Epub 2019 Apr 25.
The fusion of complementary information contained in multi-modality data [e.g., magnetic resonance imaging (MRI), positron emission tomography (PET), and genetic data] has advanced the progress of automated Alzheimer's disease (AD) diagnosis. However, multi-modality-based AD diagnostic models are often hindered by missing data, i.e., not all subjects have complete multi-modality data. One simple solution used by many previous studies is to discard samples with missing modalities. However, this significantly reduces the number of training samples, thus leading to a sub-optimal classification model. Furthermore, when building the classification model, most existing methods simply concatenate features from different modalities into a single feature vector without considering their underlying associations. As features from different modalities are often closely related (e.g., MRI and PET features are extracted from the same brain regions), exploiting their inter-modality associations may improve the robustness of the diagnostic model. To this end, we propose a novel latent representation learning method for multi-modality-based AD diagnosis. Specifically, we use all available samples (including samples with incomplete modality data) to learn a latent representation space. Within this space, we not only use samples with complete multi-modality data to learn a common latent representation, but also use samples with incomplete multi-modality data to learn independent modality-specific latent representations. We then project the latent representations to the label space for AD diagnosis. We perform experiments using 737 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, and the experimental results verify the effectiveness of our proposed method.
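The following is a minimal, hypothetical sketch of the idea described above, not the paper's actual objective or algorithm: it assumes a simple alternating-least-squares matrix factorization in which complete-modality subjects share a common latent representation, MRI-only subjects obtain a modality-specific latent representation from the learned MRI loadings, and a ridge-regression map stands in for the projection to the label space. All variable names (fit_latent, Z_common, W_mri, etc.) and the toy data are illustrative assumptions.

```python
# Illustrative sketch only: joint factorization X_m ~= Z @ W_m with a shared
# latent Z for complete subjects, a modality-specific latent for MRI-only
# subjects, and a linear projection to the label space.
import numpy as np

rng = np.random.default_rng(0)

def fit_latent(X_list, k=10, n_iter=50, lam=1e-2):
    """Alternating least squares for X_m ~= Z @ W_m over modalities m.

    X_list : list of (n_subjects, d_m) arrays observed for the SAME subjects.
    Returns the shared latent Z (n_subjects, k) and loadings W_m (k, d_m).
    """
    n = X_list[0].shape[0]
    Z = rng.standard_normal((n, k))
    I_k = np.eye(k)
    for _ in range(n_iter):
        # Ridge-regularized update of each modality's loading matrix.
        Ws = [np.linalg.solve(Z.T @ Z + lam * I_k, Z.T @ X) for X in X_list]
        # Update the shared latent representation using all modalities jointly.
        A = sum(W @ W.T for W in Ws) + lam * I_k
        B = sum(X @ W.T for X, W in zip(X_list, Ws))
        Z = np.linalg.solve(A, B.T).T
    return Z, Ws

# Toy data: 100 complete subjects with MRI and PET features (90 ROIs each),
# plus 40 MRI-only subjects with an incomplete modality set.
X_mri_full = rng.standard_normal((100, 90))
X_pet_full = rng.standard_normal((100, 90))
X_mri_only = rng.standard_normal((40, 90))

# Complete subjects -> common latent representation from both modalities.
Z_common, (W_mri, W_pet) = fit_latent([X_mri_full, X_pet_full], k=10)

# Incomplete subjects -> modality-specific latent representation from MRI alone,
# reusing the learned MRI loadings so both groups live in comparable spaces.
Z_mri_only = X_mri_only @ np.linalg.pinv(W_mri)

# Project all latent representations to the label space (ridge regression here
# stands in for the label-space projection described in the abstract).
Z_all = np.vstack([Z_common, Z_mri_only])
y = rng.integers(0, 2, size=Z_all.shape[0])          # toy AD vs. NC labels
beta = np.linalg.solve(Z_all.T @ Z_all + 1e-2 * np.eye(10), Z_all.T @ y)
pred = (Z_all @ beta > 0.5).astype(int)
print("toy training accuracy:", (pred == y).mean())
```

In this sketch, subjects with incomplete data still contribute a latent representation instead of being discarded, which is the key point the abstract makes about sample efficiency; the paper's actual formulation and optimization details differ.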