Ouyang Jiahong, Zhao Qingyu, Adeli Ehsan, Zaharchuk Greg, Pohl Kilian M
Stanford University, Stanford CA 94305, USA.
Cornell University, Ithaca NY 14850, USA.
Med Image Comput Comput Assist Interv. 2024 Oct;15002:400-410. doi: 10.1007/978-3-031-72069-7_38. Epub 2024 Oct 4.
Neuroimage modalities acquired by longitudinal studies often provide complementary information regarding disease progression. For example, amyloid PET visualizes the build-up of amyloid plaques that appear in earlier stages of Alzheimer's disease (AD), while structural MRIs depict brain atrophy appearing in the later stages of the disease. To accurately model multi-modal longitudinal data, we propose an interpretable self-supervised model called Self-Organized Multi-Modal Longitudinal Maps (SOM2LM). SOM2LM encodes each modality as a 2D self-organizing map (SOM) so that one dimension of each modality-specific SOM corresponds to disease abnormality. The model also regularizes across modalities to depict the temporal order in which they capture abnormality. When applied to longitudinal T1w MRIs and amyloid PET of the Alzheimer's Disease Neuroimaging Initiative (ADNI, n=741), SOM2LM generates interpretable latent spaces that characterize disease abnormality. Compared to state-of-the-art models, it achieves higher accuracy on the downstream tasks of cross-modality prediction of amyloid status from T1w MRI and joint-modality prediction of individuals with mild cognitive impairment converting to AD using both MRI and amyloid PET. The code is available at https://github.com/ouyangjiahong/longitudinal-som-multi-modality.
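The abstract's core idea of mapping each modality's latent code onto a 2D SOM grid, with one grid axis tracking disease abnormality, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation (see the GitHub link above); the grid size, learning rate, and variable names are illustrative assumptions, and it uses the classic SOM update rather than the self-supervised training described in the paper.

```python
# Minimal, hypothetical sketch: quantize a latent vector onto a 2D SOM grid.
# Grid size, update rule, and names are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

grid_h, grid_w, latent_dim = 8, 8, 32                 # assumed SOM grid and latent size
som = rng.normal(size=(grid_h, grid_w, latent_dim))   # SOM node embeddings

def best_matching_unit(z, som):
    """Return the (row, col) of the SOM node closest to latent vector z."""
    dists = np.linalg.norm(som - z, axis=-1)           # (grid_h, grid_w) distances
    return np.unravel_index(np.argmin(dists), dists.shape)

def som_update(z, som, lr=0.1, sigma=1.5):
    """Classic SOM update: pull the winner and its grid neighbors toward z."""
    bmu = np.array(best_matching_unit(z, som))
    rows, cols = np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij")
    grid_dist2 = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2
    neighborhood = np.exp(-grid_dist2 / (2 * sigma ** 2))[..., None]
    som += lr * neighborhood * (z - som)
    return bmu

# Toy usage with random "latent codes" standing in for a per-modality encoder output.
for _ in range(200):
    z = rng.normal(size=latent_dim)
    som_update(z, som)

# After training, one grid coordinate of the BMU could serve as the
# interpretable "abnormality" axis described in the abstract.
print(best_matching_unit(rng.normal(size=latent_dim), som))
```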