

Joint self-supervised and supervised contrastive learning for multimodal MRI data: Towards predicting abnormal neurodevelopment.

Affiliations

Imaging Research Center, Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Computer Science, University of Cincinnati, Cincinnati, OH, USA.

Imaging Research Center, Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Artificial Intelligence Imaging Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Neurodevelopmental Disorders Prevention Center, Perinatal Institute, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, USA.

Publication information

Artif Intell Med. 2024 Nov;157:102993. doi: 10.1016/j.artmed.2024.102993. Epub 2024 Sep 30.

Abstract

The integration of different imaging modalities, such as structural, diffusion tensor, and functional magnetic resonance imaging, with deep learning models has yielded promising outcomes in discerning phenotypic characteristics and enhancing disease diagnosis. The development of such a technique hinges on the efficient fusion of heterogeneous multimodal features, which initially reside within distinct representation spaces. Naively fusing the multimodal features does not adequately capture the complementary information and could even produce redundancy. In this work, we present a novel joint self-supervised and supervised contrastive learning method to learn the robust latent feature representation from multimodal MRI data, allowing the projection of heterogeneous features into a shared common space, and thereby amalgamating both complementary and analogous information across various modalities and among similar subjects. We performed a comparative analysis between our proposed method and alternative deep multimodal learning approaches. Through extensive experiments on two independent datasets, the results demonstrated that our method is significantly superior to several other deep multimodal learning methods in predicting abnormal neurodevelopment. Our method has the capability to facilitate computer-aided diagnosis within clinical practice, harnessing the power of multimodal data. The source code of the proposed model is publicly accessible on GitHub: https://github.com/leonzyzy/Contrastive-Network.
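The abstract does not spell out the loss function, but the core idea it describes — projecting heterogeneous modality features into a shared space and pulling together representations of similar subjects — is characteristic of a supervised contrastive (SupCon-style) objective. The sketch below is a minimal, hedged illustration of that idea, not the authors' implementation: the linear projections, toy dimensions, and variable names (`project`, `supervised_contrastive_loss`) are assumptions made for brevity.

```python
import numpy as np

def project(features, W):
    """Map modality-specific features into a shared space and L2-normalize.
    A linear projection stands in for a learned projection head."""
    z = features @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def supervised_contrastive_loss(z, labels, temperature=0.1):
    """SupCon-style loss: embeddings sharing a label (here, a diagnosis)
    are treated as positives and pulled together; all others are pushed apart."""
    n = z.shape[0]
    sim = z @ z.T / temperature
    logits_mask = ~np.eye(n, dtype=bool)          # exclude self-similarity
    pos_mask = (labels[:, None] == labels[None, :]) & logits_mask
    # numerically stable log-softmax over all other samples
    sim_max = sim.max(axis=1, keepdims=True)
    exp_sim = np.exp(sim - sim_max) * logits_mask
    log_prob = sim - sim_max - np.log(exp_sim.sum(axis=1, keepdims=True))
    # average log-probability of positives per anchor
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0
    mean_log_prob_pos = (log_prob * pos_mask).sum(axis=1)[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()

# Toy example: two "modalities" (e.g. structural and functional features)
# for 4 subjects, projected into a common 4-dimensional space.
rng = np.random.default_rng(0)
x_struct = rng.normal(size=(4, 8))
x_func = rng.normal(size=(4, 8))
W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(8, 4))
z = np.vstack([project(x_struct, W1), project(x_func, W2)])
labels = np.array([0, 0, 1, 1] * 2)  # diagnosis label repeated per modality
loss = supervised_contrastive_loss(z, labels)
```

Because each subject's label is repeated across modalities, cross-modal pairs of the same subject (and of same-diagnosis subjects) become positives, which is one way the complementary and analogous information the abstract mentions can be aligned in a single shared space.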


