SOM2LM: Self-Organized Multi-Modal Longitudinal Maps.

Author Information

Ouyang Jiahong, Zhao Qingyu, Adeli Ehsan, Zaharchuk Greg, Pohl Kilian M

Affiliations

Stanford University, Stanford CA 94305, USA.

Cornell University, Ithaca NY 14850, USA.

Publication Information

Med Image Comput Comput Assist Interv. 2024 Oct;15002:400-410. doi: 10.1007/978-3-031-72069-7_38. Epub 2024 Oct 4.

Abstract

Neuroimaging modalities acquired in longitudinal studies often provide complementary information about disease progression. For example, amyloid PET visualizes the build-up of amyloid plaques, which appear in the earlier stages of Alzheimer's disease (AD), while structural MRI depicts brain atrophy, which emerges in the later stages of the disease. To accurately model multi-modal longitudinal data, we propose an interpretable self-supervised model called Self-Organized Multi-Modal Longitudinal Maps (SOM2LM). SOM2LM encodes each modality as a 2D self-organizing map (SOM) so that one dimension of each modality-specific SOM corresponds to disease abnormality. The model also regularizes across modalities to capture the temporal order in which they exhibit abnormality. When applied to longitudinal T1w MRI and amyloid PET scans of the Alzheimer's Disease Neuroimaging Initiative (ADNI, n=741), SOM2LM generates interpretable latent spaces that characterize disease abnormality. Compared to state-of-the-art models, it achieves higher accuracy on two downstream tasks: cross-modality prediction of amyloid status from T1w MRI, and joint-modality prediction of conversion from mild cognitive impairment to AD using both MRI and amyloid PET. The code is available at https://github.com/ouyangjiahong/longitudinal-som-multi-modality.
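The abstract names two mechanisms: a per-modality 2D SOM whose grid axis encodes disease abnormality, and a cross-modality regularizer reflecting that amyloid build-up precedes structural change. The sketch below shows one plausible way to wire these up in PyTorch. It is a minimal illustration under assumptions, not the authors' implementation: the names ModalitySOM and temporal_order_loss, the straight-through quantization, and the margin penalty are all hypothetical; the actual code lives in the linked repository.

```python
# Illustrative sketch (not the authors' code): a 2D SOM codebook per
# modality whose column axis is meant to track disease abnormality,
# plus a hypothetical cross-modality ordering penalty.
import torch
import torch.nn.functional as F

class ModalitySOM(torch.nn.Module):
    """Hypothetical 2D self-organizing codebook for one modality."""
    def __init__(self, rows: int = 8, cols: int = 16, dim: int = 64):
        super().__init__()
        self.rows, self.cols = rows, cols
        # rows x cols prototype vectors arranged on a 2D grid.
        self.codebook = torch.nn.Parameter(torch.randn(rows, cols, dim))

    def forward(self, z: torch.Tensor):
        # z: (batch, dim) latents from a modality-specific encoder.
        flat = self.codebook.view(-1, self.codebook.shape[-1])  # (R*C, dim)
        idx = torch.cdist(z, flat).argmin(dim=1)                # nearest node
        row = torch.div(idx, self.cols, rounding_mode="floor")
        col = idx % self.cols                                   # abnormality axis
        # Straight-through estimator so gradients still reach the encoder.
        q = z + (flat[idx] - z).detach()
        return q, row, col

def temporal_order_loss(col_pet, col_mri, cols: int, margin: float = 0.0):
    """Hypothetical regularizer: for the same subject/visit, amyloid
    abnormality (PET column) should be at least as advanced as MRI
    atrophy, since plaques precede structural change in AD."""
    a_pet = col_pet.float() / (cols - 1)  # normalize coordinate to [0, 1]
    a_mri = col_mri.float() / (cols - 1)
    return F.relu(a_mri - a_pet + margin).mean()

# Minimal usage with random stand-in latents:
som_mri, som_pet = ModalitySOM(), ModalitySOM()
z_mri, z_pet = torch.randn(4, 64), torch.randn(4, 64)
_, _, col_mri = som_mri(z_mri)
_, _, col_pet = som_pet(z_pet)
loss = temporal_order_loss(col_pet, col_mri, cols=16)
```

In a real training loop the hard argmin would typically be replaced by a soft assignment over grid nodes so that an ordering penalty of this kind can propagate gradients; the sketch uses hard assignments only to keep the geometry of the two grids easy to see.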
