

Disentangle, Align and Fuse for Multimodal and Semi-Supervised Image Segmentation.

Publication Information

IEEE Trans Med Imaging. 2021 Mar;40(3):781-792. doi: 10.1109/TMI.2020.3036584. Epub 2021 Mar 2.

Abstract

Magnetic resonance (MR) protocols rely on several sequences to assess pathology and organ status properly. Despite advances in image analysis, we tend to treat each sequence, here termed modality, in isolation. Taking advantage of the common information shared between modalities (an organ's anatomy) is beneficial for multi-modality processing and learning. However, we must overcome inherent anatomical misregistrations and disparities in signal intensity across the modalities to obtain this benefit. We present a method that offers improved segmentation accuracy of the modality of interest (over a single input model), by learning to leverage information present in other modalities, even if few (semi-supervised) or no (unsupervised) annotations are available for this specific modality. Core to our method is learning a disentangled decomposition into anatomical and imaging factors. Shared anatomical factors from the different inputs are jointly processed and fused to extract more accurate segmentation masks. Image misregistrations are corrected with a Spatial Transformer Network, which non-linearly aligns the anatomical factors. The imaging factor captures signal intensity characteristics across different modality data and is used for image reconstruction, enabling semi-supervised learning. Temporal and slice pairing between inputs are learned dynamically. We demonstrate applications in Late Gadolinium Enhanced (LGE) and Blood Oxygenation Level Dependent (BOLD) cardiac segmentation, as well as in T2 abdominal segmentation. Code is available at https://github.com/vios-s/multimodal_segmentation.
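The abstract describes a three-stage pipeline: each modality is disentangled into an anatomical factor and an imaging factor, the anatomical factors are non-linearly aligned with a Spatial Transformer Network, and the aligned factors are fused before segmentation. The sketch below illustrates that data flow only; every function name, shape, and operation is a simplifying assumption (the alignment step is a placeholder for a real STN), not the authors' implementation — see the linked repository for the actual code.

```python
import numpy as np

def disentangle(image):
    """Toy stand-in for the disentangling encoder: split an image into an
    anatomical factor (channel-wise spatial maps) and an imaging factor
    (a low-dimensional intensity code)."""
    c = 4  # number of anatomical channels (assumed)
    edges = np.linspace(image.min(), image.max() + 1e-6, c + 1)
    # Anatomy: crude binary intensity bands as a proxy for anatomical maps.
    anatomy = np.stack([(image >= edges[i]) & (image < edges[i + 1])
                        for i in range(c)]).astype(np.float32)
    # Imaging: an 8-d code summarising signal-intensity statistics.
    imaging = np.array([image.mean(), image.std()] + [0.0] * 6,
                       dtype=np.float32)
    return anatomy, imaging

def align(anatomy, reference):
    """Placeholder for the Spatial Transformer Network: a real STN would
    regress and apply a deformation field; here we return the input."""
    return anatomy

def fuse(anatomies):
    """Fuse aligned anatomical factors, e.g. by channel-wise maximum."""
    return np.maximum.reduce(anatomies)

# Two "modalities" of the same anatomy, one with simulated misregistration.
mod_a = np.random.rand(32, 32).astype(np.float32)
mod_b = np.roll(mod_a, shift=2, axis=1)

anat_a, img_a = disentangle(mod_a)
anat_b, img_b = disentangle(mod_b)
anat_b_aligned = align(anat_b, reference=anat_a)
fused = fuse([anat_a, anat_b_aligned])  # input to the segmentation head
```

In the paper's setting, the imaging factors (`img_a`, `img_b` here) would additionally drive image reconstruction, which is what enables the semi-supervised training described above.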


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5c6a/8011298/2e75f2b39698/nihms-1679372-f0001.jpg
