
Adapt Everywhere: Unsupervised Adaptation of Point-Clouds and Entropy Minimization for Multi-Modal Cardiac Image Segmentation.

Publication

IEEE Trans Med Imaging. 2021 Jul;40(7):1838-1851. doi: 10.1109/TMI.2021.3066683. Epub 2021 Jun 30.

Abstract

Deep learning models are sensitive to domain shift phenomena. A model trained on images from one domain cannot generalise well when tested on images from a different domain, despite capturing similar anatomical structures. This is mainly because the data distribution between the two domains is different. Moreover, creating annotations for every new modality is a tedious and time-consuming task, which also suffers from high inter- and intra-observer variability. Unsupervised domain adaptation (UDA) methods intend to reduce the gap between source and target domains by leveraging source domain labelled data to generate labels for the target domain. However, current state-of-the-art (SOTA) UDA methods demonstrate degraded performance when there is insufficient data in source and target domains. In this paper, we present a novel UDA method for multi-modal cardiac image segmentation. The proposed method is based on adversarial learning and adapts network features between source and target domain in different spaces. The paper introduces an end-to-end framework that integrates: a) entropy minimization, b) output feature space alignment and c) a novel point-cloud shape adaptation based on the latent features learned by the segmentation model. We validated our method on two cardiac datasets by adapting from the annotated source domain, bSSFP-MRI (balanced Steady-State Free Precession MRI), to the unannotated target domain, LGE-MRI (Late Gadolinium Enhanced MRI), for the multi-sequence dataset; and from MRI (source) to CT (target) for the cross-modality dataset. The results highlighted that by enforcing adversarial learning in different parts of the network, the proposed method delivered promising performance, compared to other SOTA methods.
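Of the three components listed in the abstract, the entropy-minimization term (a) is the most standard and can be illustrated compactly. The sketch below is a minimal, framework-free illustration of pixel-wise Shannon entropy over a segmenter's softmax output; the function names and the epsilon constant are illustrative assumptions, not the authors' implementation, and the paper's actual loss weighting and adversarial components are not reproduced here.

```python
import math

def pixel_entropy(class_probs, eps=1e-12):
    # Shannon entropy of one pixel's per-class softmax probabilities.
    # eps guards against log(0); its value is an illustrative choice.
    return -sum(p * math.log(p + eps) for p in class_probs)

def entropy_loss(prob_map):
    # Mean per-pixel entropy over a probability map (list of pixels,
    # each a list of class probabilities). Minimising this on the
    # unlabelled target domain pushes the segmenter toward confident,
    # low-entropy predictions there.
    entropies = [pixel_entropy(pix) for pix in prob_map]
    return sum(entropies) / len(entropies)
```

As a sanity check, a confident prediction such as `[0.99, 0.01]` yields much lower entropy than the maximally uncertain `[0.5, 0.5]`, whose entropy is ln 2 for two classes; driving this loss down on target-domain images is what sharpens the decision boundaries there.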

