

FPL+: Filtered Pseudo Label-Based Unsupervised Cross-Modality Adaptation for 3D Medical Image Segmentation.

Publication

IEEE Trans Med Imaging. 2024 Sep;43(9):3098-3109. doi: 10.1109/TMI.2024.3387415. Epub 2024 Sep 3.

Abstract

Adapting a medical image segmentation model to a new domain is important for improving its cross-domain transferability, and due to the expensive annotation process, Unsupervised Domain Adaptation (UDA) is appealing where only unlabeled images are needed for the adaptation. Existing UDA methods are mainly based on image or feature alignment with adversarial training for regularization, and they are limited by insufficient supervision in the target domain. In this paper, we propose an enhanced Filtered Pseudo Label (FPL+)-based UDA method for 3D medical image segmentation. It first uses cross-domain data augmentation to translate labeled images in the source domain to a dual-domain training set consisting of a pseudo source-domain set and a pseudo target-domain set. To leverage the dual-domain augmented images to train a pseudo label generator, domain-specific batch normalization layers are used to deal with the domain shift while learning the domain-invariant structure features, generating high-quality pseudo labels for target-domain images. We then combine labeled source-domain images and target-domain images with pseudo labels to train a final segmentor, where image-level weighting based on uncertainty estimation and pixel-level weighting based on dual-domain consensus are proposed to mitigate the adverse effect of noisy pseudo labels. Experiments on three public multi-modal datasets for Vestibular Schwannoma, brain tumor and whole heart segmentation show that our method surpassed ten state-of-the-art UDA methods, and it even achieved better results than fully supervised learning in the target domain in some cases.
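The filtering idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the function name and the NumPy toy setup are hypothetical stand-ins for their 3D pipeline. It shows the two weighting mechanisms acting together on a pseudo-label cross-entropy loss: a scalar image-level weight (assumed to come from uncertainty estimation) scales the whole image's loss, and a pixel-level mask (assumed to come from dual-domain consensus) suppresses pixels where the predictions disagree.

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def weighted_pseudo_label_loss(logits, pseudo_labels, image_weight, pixel_mask):
    """Cross-entropy on pseudo labels with two reliability weights:
    - image_weight: scalar in [0, 1], low when the pseudo-label
      generator is uncertain about this image as a whole;
    - pixel_mask: per-pixel weights in [0, 1], zero where the two
      domain-specific predictions disagree (consensus filtering).
    """
    probs = softmax(logits, axis=0)                          # (C, H, W)
    picked = np.take_along_axis(probs, pseudo_labels[None], axis=0)[0]
    per_pixel = -np.log(picked + 1e-8)                       # (H, W)
    return image_weight * (per_pixel * pixel_mask).mean()

# Toy usage: 3-class segmentation on a 4x4 slice with noisy pseudo labels.
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 4, 4))
pseudo = rng.integers(0, 3, size=(4, 4))
consensus = (rng.random((4, 4)) > 0.3).astype(float)   # agreement mask
loss = weighted_pseudo_label_loss(logits, pseudo, 0.8, consensus)
```

Either weight alone can silence unreliable supervision: a zero image-level weight discards the whole image, and a zero consensus mask discards every pixel, so noisy pseudo labels contribute nothing to training.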


Similar Articles

1
Two-stage adversarial learning based unsupervised domain adaptation for retinal OCT segmentation.
Med Phys. 2024 Aug;51(8):5374-5385. doi: 10.1002/mp.17012. Epub 2024 Mar 1.
2
S-CUDA: Self-cleansing unsupervised domain adaptation for medical image segmentation.
Med Image Anal. 2021 Dec;74:102214. doi: 10.1016/j.media.2021.102214. Epub 2021 Aug 12.
3
LE-UDA: Label-Efficient Unsupervised Domain Adaptation for Medical Image Segmentation.
IEEE Trans Med Imaging. 2023 Mar;42(3):633-646. doi: 10.1109/TMI.2022.3214766. Epub 2023 Mar 2.
4
A medical unsupervised domain adaptation framework based on Fourier transform image translation and multi-model ensemble self-training strategy.
Int J Comput Assist Radiol Surg. 2023 Oct;18(10):1885-1894. doi: 10.1007/s11548-023-02867-5. Epub 2023 Apr 3.
5
Style mixup enhanced disentanglement learning for unsupervised domain adaptation in medical image segmentation.
Med Image Anal. 2025 Apr;101:103440. doi: 10.1016/j.media.2024.103440. Epub 2024 Dec 30.
6
Memory consistent unsupervised off-the-shelf model adaptation for source-relaxed medical image segmentation.
Med Image Anal. 2023 Jan;83:102641. doi: 10.1016/j.media.2022.102641. Epub 2022 Oct 1.
7
Image-level supervision and self-training for transformer-based cross-modality tumor segmentation.
Med Image Anal. 2024 Oct;97:103287. doi: 10.1016/j.media.2024.103287. Epub 2024 Jul 31.
