

Unsupervised Cross-Modality Adaptation via Dual Structural-Oriented Guidance for 3D Medical Image Segmentation.

Publication Info

IEEE Trans Med Imaging. 2023 Jun;42(6):1774-1785. doi: 10.1109/TMI.2023.3238114. Epub 2023 Jun 1.

Abstract

Deep convolutional neural networks (CNNs) have achieved impressive performance in medical image segmentation; however, their performance could degrade significantly when being deployed to unseen data with heterogeneous characteristics. Unsupervised domain adaptation (UDA) is a promising solution to tackle this problem. In this work, we present a novel UDA method, named dual adaptation-guiding network (DAG-Net), which incorporates two highly effective and complementary structural-oriented guidance in training to collaboratively adapt a segmentation model from a labelled source domain to an unlabeled target domain. Specifically, our DAG-Net consists of two core modules: 1) Fourier-based contrastive style augmentation (FCSA) which implicitly guides the segmentation network to focus on learning modality-insensitive and structural-relevant features, and 2) residual space alignment (RSA) which provides explicit guidance to enhance the geometric continuity of the prediction in the target modality based on a 3D prior of inter-slice correlation. We have extensively evaluated our method with cardiac substructure and abdominal multi-organ segmentation for bidirectional cross-modality adaptation between MRI and CT images. Experimental results on two different tasks demonstrate that our DAG-Net greatly outperforms the state-of-the-art UDA approaches for 3D medical image segmentation on unlabeled target images.
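The FCSA module builds on the general idea behind Fourier-based style augmentation: in the frequency domain, the amplitude spectrum largely carries modality-specific appearance ("style") while the phase spectrum carries anatomical structure, so swapping the low-frequency amplitude between images changes appearance while preserving geometry. The sketch below illustrates this generic amplitude-swap operation, not the authors' exact FCSA formulation; the function name, the `beta` band-size parameter, and the single-channel 2D setting are illustrative assumptions.

```python
import numpy as np

def fourier_style_transfer(src, ref, beta=0.1):
    """Replace the low-frequency amplitude of `src` with that of `ref`.

    Illustrative sketch of Fourier-based style augmentation (not the
    paper's exact FCSA module): amplitude ~ style, phase ~ structure,
    so this changes appearance while keeping anatomy intact.
    `beta` controls the half-width of the swapped central band.
    """
    # 2D FFT of both single-channel images (assumed same shape)
    f_src = np.fft.fft2(src)
    f_ref = np.fft.fft2(ref)

    pha_src = np.angle(f_src)              # structure of the source
    amp_src = np.fft.fftshift(np.abs(f_src))
    amp_ref = np.fft.fftshift(np.abs(f_ref))

    # Swap only the central (low-frequency) amplitude band
    h, w = src.shape
    b = int(min(h, w) * beta)
    ch, cw = h // 2, w // 2
    amp_src[ch - b:ch + b, cw - b:cw + b] = amp_ref[ch - b:ch + b, cw - b:cw + b]

    # Recombine the mixed amplitude with the original source phase
    amp_src = np.fft.ifftshift(amp_src)
    stylised = np.fft.ifft2(amp_src * np.exp(1j * pha_src))
    return np.real(stylised)
```

With `beta=0` no band is swapped and the round trip returns the source image unchanged; larger `beta` transfers more of the reference image's appearance onto the source structure, which is how such augmentation exposes a segmentation network to target-like styles without target labels.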

