
Relation-Induced Multi-Modal Shared Representation Learning for Alzheimer's Disease Diagnosis.

Publication Information

IEEE Trans Med Imaging. 2021 Jun;40(6):1632-1645. doi: 10.1109/TMI.2021.3063150. Epub 2021 Jun 1.

Abstract

The fusion of multi-modal data (e.g., magnetic resonance imaging (MRI) and positron emission tomography (PET)) has been prevalent for accurate identification of Alzheimer's disease (AD), as the modalities provide complementary structural and functional information. However, most existing methods simply concatenate multi-modal features in the original space and ignore their underlying associations, which may provide more discriminative characteristics for AD identification. Meanwhile, overcoming the overfitting caused by high-dimensional multi-modal data remains challenging. To this end, we propose a relation-induced multi-modal shared representation learning method for AD diagnosis. The proposed method integrates representation learning, dimension reduction, and classifier modeling into a unified framework. Specifically, the framework first obtains multi-modal shared representations by learning a bi-directional mapping between the original space and a shared space. Within this shared space, we utilize several relational regularizers (including feature-feature, feature-label, and sample-sample regularizers) and auxiliary regularizers to encourage learning the underlying associations inherent in multi-modal data and to alleviate overfitting, respectively. Next, we project the shared representations into the target space for AD diagnosis. To validate the effectiveness of the proposed approach, we conduct extensive experiments on two independent datasets (i.e., ADNI-1 and ADNI-2); the experimental results demonstrate that our method outperforms several state-of-the-art methods.
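
The abstract describes the framework only at a high level. The snippet below is a minimal sketch (in PyTorch, and not the authors' implementation) of how such an objective could be assembled: each modality is encoded into a shared space, a reconstruction term approximates the bi-directional mapping, a sample-sample graph regularizer stands in for the relational regularizers, and a linear classifier maps the shared codes to the target space. All dimensions, regularization weights, the similarity graph, and the synthetic data are illustrative assumptions.

```python
# Minimal sketch of a relation-regularized multi-modal shared representation
# objective. This is an illustration under stated assumptions, not the paper's
# actual method or code.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Illustrative sizes: 120 subjects, 90 ROI features per modality, 32-D shared space.
n, d_mri, d_pet, d_shared, n_classes = 120, 90, 90, 32, 2

# Synthetic stand-ins for MRI/PET region-of-interest features and diagnostic labels.
X_mri = torch.randn(n, d_mri)
X_pet = torch.randn(n, d_pet)
y = torch.randint(0, n_classes, (n,))

# Per-modality encoders (original -> shared) and decoders (shared -> original),
# approximating the bi-directional mapping, plus a classifier on the shared space.
enc_mri = torch.nn.Linear(d_mri, d_shared)
enc_pet = torch.nn.Linear(d_pet, d_shared)
dec_mri = torch.nn.Linear(d_shared, d_mri)
dec_pet = torch.nn.Linear(d_shared, d_pet)
clf = torch.nn.Linear(d_shared, n_classes)

params = [p for m in (enc_mri, enc_pet, dec_mri, dec_pet, clf) for p in m.parameters()]
opt = torch.optim.Adam(params, lr=1e-2)

# Sample-sample affinity: 1 for same-label pairs, 0 otherwise (an assumption;
# any subject-similarity graph could be plugged in here).
S = (y[:, None] == y[None, :]).float()
L = torch.diag(S.sum(1)) - S  # graph Laplacian

lam_rec, lam_graph = 1.0, 1e-3  # illustrative regularization weights

for step in range(200):
    opt.zero_grad()
    # Shared representation: average of the two modality encodings.
    H = 0.5 * (enc_mri(X_mri) + enc_pet(X_pet))

    # Reconstruction of each original space from the shared code.
    loss_rec = F.mse_loss(dec_mri(H), X_mri) + F.mse_loss(dec_pet(H), X_pet)

    # Sample-sample regularizer: tr(H^T L H) penalizes distant codes for similar subjects.
    loss_graph = torch.trace(H.t() @ L @ H) / n

    # Classification in the target space.
    loss_cls = F.cross_entropy(clf(H), y)

    loss = loss_cls + lam_rec * loss_rec + lam_graph * loss_graph
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.3f}")
```

Note that the paper also uses feature-feature and feature-label relational regularizers and auxiliary regularizers; the graph term above only illustrates the sample-sample case.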

