
Representation Disentanglement for Multi-modal Brain MRI Analysis.

Authors

Ouyang Jiahong, Adeli Ehsan, Pohl Kilian M, Zhao Qingyu, Zaharchuk Greg

Affiliations

Stanford University, Stanford, CA.

SRI International, Menlo Park, CA.

Publication

Inf Process Med Imaging. 2021 Jun;12729:321-333. doi: 10.1007/978-3-030-78191-0_25. Epub 2021 Jun 14.

Abstract

Multi-modal MRIs are widely used in neuroimaging applications since different MR sequences provide complementary information about brain structures. Recent works have suggested that multi-modal deep learning analysis can benefit from explicitly disentangling anatomical (shape) and modality (appearance) information into separate image presentations. In this work, we challenge mainstream strategies by showing that they do not naturally lead to representation disentanglement both in theory and in practice. To address this issue, we propose a margin loss that regularizes the similarity in relationships of the representations across subjects and modalities. To enable robust training, we further use a conditional convolution to design a single model for encoding images of all modalities. Lastly, we propose a fusion function to combine the disentangled anatomical representations as a set of modality-invariant features for downstream tasks. We evaluate the proposed method on three multi-modal neuroimaging datasets. Experiments show that our proposed method can achieve superior disentangled representations compared to existing disentanglement strategies. Results also indicate that the fused anatomical representation has potential in the downstream task of zero-dose PET reconstruction and brain tumor segmentation.
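The margin loss is described above only at a high level. A minimal triplet-style sketch is shown below; the cosine-similarity choice, the hinge formulation, and all function names are assumptions for illustration, not the paper's exact loss (which regularizes similarity relationships across subjects and modalities):

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two 1-D representation vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def margin_loss(anchor, positive, negative, margin=0.5):
    """Hinge-style margin loss: a subject's anatomical representation (anchor)
    should be closer to the same subject's representation from another
    modality (positive) than to another subject's representation (negative),
    by at least `margin`."""
    return max(0.0, margin - cosine_sim(anchor, positive) + cosine_sim(anchor, negative))
```

When the cross-modality pair is already more similar than the cross-subject pair by the margin, the loss is zero and exerts no gradient pressure, which is the usual rationale for a hinge over a plain difference.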

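The conditional convolution the abstract mentions, which lets a single model encode images of all modalities, can be sketched as a modality-conditioned mixture of expert kernels. This is a minimal 1-D sketch under assumed design choices (softmax mixing, per-modality logits, function names); the paper's actual layer may differ:

```python
import numpy as np

def conditional_conv1d(x, expert_kernels, modality_logits):
    """One convolution layer shared across modalities: the effective kernel
    is a softmax-weighted mixture of expert kernels, with mixture weights
    conditioned on the input's modality."""
    alpha = np.exp(modality_logits - modality_logits.max())
    alpha /= alpha.sum()                          # softmax over experts
    kernel = alpha @ expert_kernels               # mixed kernel, shape (kernel_size,)
    return np.convolve(x, kernel, mode="valid")   # single shared convolution
```

With a near-one-hot `modality_logits`, the layer reduces to the expert kernel for that modality, so a single set of shared weights can specialize per modality without training one encoder per sequence.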

Similar Articles

1
Representation Disentanglement for Multi-modal Brain MRI Analysis.
Inf Process Med Imaging. 2021 Jun;12729:321-333. doi: 10.1007/978-3-030-78191-0_25. Epub 2021 Jun 14.
2
A Disentangled Representation Based Brain Image Fusion Group Lasso Penalty.
Front Neurosci. 2022 Jul 18;16:937861. doi: 10.3389/fnins.2022.937861. eCollection 2022.
3
Hi-Net: Hybrid-Fusion Network for Multi-Modal MR Image Synthesis.
IEEE Trans Med Imaging. 2020 Sep;39(9):2772-2781. doi: 10.1109/TMI.2020.2975344. Epub 2020 Feb 20.

Cited By

1
Generating Realistic Brain MRIs via a Conditional Diffusion Probabilistic Model.
Med Image Comput Comput Assist Interv. 2023 Oct;14227:14-24. doi: 10.1007/978-3-031-43993-3_2. Epub 2023 Oct 1.
2
HACA3: A unified approach for multi-site MR image harmonization.
Comput Med Imaging Graph. 2023 Oct;109:102285. doi: 10.1016/j.compmedimag.2023.102285. Epub 2023 Aug 14.

References

1
Multi-Domain Image Completion for Random Missing Input Data.
IEEE Trans Med Imaging. 2021 Apr;40(4):1113-1122. doi: 10.1109/TMI.2020.3046444. Epub 2021 Apr 1.
2
Confounder-Aware Visualization of ConvNets.
Mach Learn Med Imaging. 2019 Oct;11861:328-336. doi: 10.1007/978-3-030-32692-0_38. Epub 2019 Oct 10.
3
The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS).
IEEE Trans Med Imaging. 2015 Oct;34(10):1993-2024. doi: 10.1109/TMI.2014.2377694. Epub 2014 Dec 4.
