Institute for Digital Communications, School of Engineering, University of Edinburgh, West Mains Rd, Edinburgh EH9 3FB, UK.
Med Image Anal. 2019 Dec;58:101535. doi: 10.1016/j.media.2019.101535. Epub 2019 Jul 18.
Typically, a medical image offers spatial information on the anatomy (and pathology) modulated by imaging-specific characteristics. Many imaging modalities, including Magnetic Resonance Imaging (MRI) and Computed Tomography (CT), can be interpreted in this way. We can venture further and consider that a medical image naturally factors into spatial factors depicting anatomy and factors that denote the imaging characteristics. Here, we explicitly learn this decomposed (disentangled) representation of imaging data, focusing in particular on cardiac images. We propose the Spatial Decomposition Network (SDNet), which factorises 2D medical images into spatial anatomical factors and non-spatial modality factors. We demonstrate that this high-level representation is ideally suited to several medical image analysis tasks, such as semi-supervised segmentation, multi-task segmentation and regression, and image-to-image synthesis. Specifically, we show that our model can match the performance of fully supervised segmentation models while using only a fraction of the labelled images. Critically, we show that our factorised representation also benefits from supervision obtained either when we use auxiliary tasks to train the model in a multi-task setting (e.g. regressing to known cardiac indices), or when we aggregate multimodal data from different sources (e.g. pooling together MRI and CT data). To explore the properties of the learned factorisation, we perform latent-space arithmetic and show that we can synthesise CT from MR and vice versa by swapping the modality factors. We also demonstrate that the factor holding image-specific information can be used to predict the input modality with high accuracy. Code will be made available at https://github.com/agis85/anatomy_modality_decomposition.
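The latent-space arithmetic described in the abstract (keep one image's anatomy factor, swap in another image's modality factor) can be illustrated with a toy sketch. Everything below is hypothetical: the encoders and decoder are crude stand-ins chosen for runnability, not the SDNet architecture; the anatomy factor is modelled as one-hot spatial channels and the modality factor as global intensity statistics.

```python
import numpy as np

def encode_anatomy(image, n_channels=4):
    """Toy anatomy encoder: assign each pixel to one of n_channels
    one-hot spatial channels by intensity quantile."""
    bins = np.quantile(image, np.linspace(0, 1, n_channels + 1)[1:-1])
    labels = np.digitize(image, bins)          # (H, W) ints in [0, n_channels)
    return np.eye(n_channels)[labels]          # (H, W, n_channels) one-hot map

def encode_modality(image):
    """Toy modality encoder: summarise imaging characteristics as
    global intensity statistics (mean, std)."""
    return np.array([image.mean(), image.std()])

def decode(anatomy, modality):
    """Toy decoder: render each anatomical channel with an intensity
    derived from the modality vector."""
    mean, std = modality
    n = anatomy.shape[-1]
    channel_intensities = mean + std * np.linspace(-1.0, 1.0, n)
    return anatomy @ channel_intensities       # (H, W) synthesised image

rng = np.random.default_rng(0)
mr_like = rng.normal(0.3, 0.1, (8, 8))   # pretend MR image (dark, low contrast)
ct_like = rng.normal(0.7, 0.2, (8, 8))   # pretend CT image (bright, high contrast)

# Latent-space arithmetic: MR anatomy combined with the CT modality factor.
synth = decode(encode_anatomy(mr_like), encode_modality(ct_like))

# The synthesised image keeps the MR spatial layout but inherits
# CT-like global statistics.
assert synth.shape == (8, 8)
assert abs(synth.mean() - ct_like.mean()) < abs(synth.mean() - mr_like.mean())
```

The design choice mirrored here is the paper's central split: the anatomy factor is spatial (a per-pixel channel map), the modality factor is non-spatial (a small vector), and synthesis across modalities reduces to recombining factors from different inputs.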