
Disentangled representation and cross-modality image translation based unsupervised domain adaptation method for abdominal organ segmentation.

Affiliations

College of Information Science and Technology, Donghua University, Shanghai, China.

Publication

Int J Comput Assist Radiol Surg. 2022 Jun;17(6):1101-1113. doi: 10.1007/s11548-022-02590-7. Epub 2022 Mar 17.

Abstract

PURPOSE

Existing medical image segmentation models tend to achieve satisfactory performance when the training and test data are drawn from the same distribution, but their performance often degrades significantly when they are evaluated on cross-modality data. To facilitate the deployment of deep learning models in real-world medical scenarios and to mitigate the performance degradation caused by domain shift, we propose an unsupervised cross-modality segmentation framework based on representation disentanglement and image-to-image translation.

METHODS

Our approach builds on a multimodal image translation framework that assumes the latent space of images can be decomposed into a content space and a style space. First, encoders decompose image representations into content and style codes, which are recombined to generate cross-modality images. Second, we propose content and style reconstruction losses to preserve the semantic information of the original images, and we construct content discriminators to match the content distributions of the source and target domains. Synthetic images with target-domain style and source-domain anatomical structure are then used to train the segmentation model.
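
To make the decomposition-recombination step concrete, below is a minimal PyTorch sketch of the translation pipeline, loosely in the style of MUNIT-type multimodal translation. All module architectures, names (ContentEncoder, StyleEncoder, Decoder), shapes, and the style-injection choice are illustrative assumptions, not the authors' implementation; the content discriminator and the segmentation network are omitted.

```python
# Minimal sketch of the disentangled translation step described above.
# Everything here (architectures, shapes, style injection) is an
# illustrative assumption, not the authors' code.
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    """Maps an image to a spatial content code (anatomy)."""
    def __init__(self, in_ch=1, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, dim, 7, 1, 3), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 4, 2, 1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.net(x)

class StyleEncoder(nn.Module):
    """Maps an image to a low-dimensional style code (modality appearance)."""
    def __init__(self, in_ch=1, style_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 7, 1, 3), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, style_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Recombines a content code with a style code into an image."""
    def __init__(self, dim=64, style_dim=8, out_ch=1):
        super().__init__()
        self.style_proj = nn.Linear(style_dim, dim)
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(dim, dim, 5, 1, 2), nn.ReLU(inplace=True),
            nn.Conv2d(dim, out_ch, 7, 1, 3), nn.Tanh(),
        )
    def forward(self, content, style):
        # Inject style as a per-channel bias (AdaIN would be the usual
        # choice; a linear projection keeps the sketch short).
        b = self.style_proj(style)[:, :, None, None]
        return self.net(content + b)

E_c, E_s, G = ContentEncoder(), StyleEncoder(), Decoder()
l1 = nn.L1Loss()

x_src = torch.randn(4, 1, 256, 256)   # e.g. MRI slices (source domain)
x_tgt = torch.randn(4, 1, 256, 256)   # e.g. CT slices (target domain)

# Decompose both domains into content and style codes.
c_src, s_src = E_c(x_src), E_s(x_src)
c_tgt, s_tgt = E_c(x_tgt), E_s(x_tgt)

# Cross-modality translation: source anatomy rendered in target style.
x_src2tgt = G(c_src, s_tgt)

# Content/style reconstruction losses: re-encode the translation and
# require the recovered codes to match the ones it was built from.
loss_content = l1(E_c(x_src2tgt), c_src.detach())
loss_style = l1(E_s(x_src2tgt), s_tgt.detach())

# A content discriminator (omitted here) would additionally push
# c_src and c_tgt toward a shared, domain-invariant distribution.
print(x_src2tgt.shape, loss_content.item(), loss_style.item())
```

In the full framework, translations like x_src2tgt (source anatomy, target appearance) are what get paired with source-domain labels to train the target-domain segmentation model.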

RESULTS

We applied our framework to bidirectional adaptation experiments on MRI and CT images of abdominal organs. Compared with the model without adaptation, the Dice similarity coefficient (DSC) increased by nearly 30% and 25%, and the average symmetric surface distance (ASSD) dropped by 13.3 and 12.2, respectively.
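
For reference, DSC measures volumetric overlap (unitless, often quoted in percent), while ASSD averages the symmetric distances between the two segmentation surfaces, typically in millimeters. Below is a minimal NumPy/SciPy sketch of both metrics; the toy masks and unit voxel spacing are illustrative assumptions, not the paper's evaluation code.

```python
# Minimal sketch of the two reported metrics on binary masks.
# Toy masks and unit voxel spacing are illustrative assumptions.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(pred, gt):
    """DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * (pred & gt).sum() / denom if denom else 1.0

def assd(pred, gt, spacing=1.0):
    """Average symmetric surface distance (in units of `spacing`)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    sp = pred & ~binary_erosion(pred)   # surface voxels of prediction
    sg = gt & ~binary_erosion(gt)       # surface voxels of ground truth
    # Distance from every voxel to the nearest surface voxel of the other mask.
    d_to_g = distance_transform_edt(~sg, sampling=spacing)
    d_to_p = distance_transform_edt(~sp, sampling=spacing)
    return (d_to_g[sp].sum() + d_to_p[sg].sum()) / (sp.sum() + sg.sum())

pred = np.zeros((8, 8), np.uint8); pred[2:6, 2:6] = 1
gt   = np.zeros((8, 8), np.uint8); gt[3:7, 3:7] = 1
print(f"DSC = {dice(pred, gt):.3f}, ASSD = {assd(pred, gt):.3f}")
```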

CONCLUSION

The proposed unsupervised domain adaptation framework effectively improves the performance of cross-modality segmentation and minimizes the negative impact of domain shift. Furthermore, the translated images retain the semantic information and anatomical structure of the originals. Our method significantly outperforms several competing methods.

