
Domain-interactive Contrastive Learning and Prototype-guided Self-training for Cross-domain Polyp Segmentation.

Authors

Lu Ziru, Zhang Yizhe, Zhou Yi, Wu Ye, Zhou Tao

Publication Information

IEEE Trans Med Imaging. 2024 Aug 14;PP. doi: 10.1109/TMI.2024.3443262.

Abstract

Accurate polyp segmentation from colonoscopy images plays a critical role in the diagnosis and treatment of colorectal cancer. While deep learning-based polyp segmentation models have made significant progress, they often suffer from performance degradation when applied to unseen target-domain datasets collected from different imaging devices. To address this challenge, unsupervised domain adaptation (UDA) methods have gained attention by leveraging labeled source data and unlabeled target data to reduce the domain gap. However, existing UDA methods primarily focus on capturing class-wise representations while neglecting domain-wise representations. Additionally, uncertainty in pseudo-labels can hinder segmentation performance. To tackle these issues, we propose a novel Domain-interactive Contrastive Learning and Prototype-guided Self-training (DCL-PS) framework for cross-domain polyp segmentation. Specifically, domain-interactive contrastive learning (DCL) with a domain-mixed prototype updating strategy is proposed to discriminate class-wise feature representations across domains. Then, to enhance the feature extraction ability of the encoder, we present a contrastive learning-based cross-consistency training (CL-CCT) strategy, which is imposed on both the prototypes obtained from the outputs of the main decoder and the perturbed auxiliary outputs. Furthermore, we propose a prototype-guided self-training (PS) strategy, which dynamically assigns a weight to each pixel during self-training, filtering out unreliable pixels and improving the quality of pseudo-labels. Experimental results demonstrate the superiority of DCL-PS in improving polyp segmentation performance in the target domain. The code will be released at https://github.com/taozh2017/DCLPS.
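The two prototype mechanisms described above can be illustrated with a minimal sketch. This is not the authors' released implementation; the function names, the EMA momentum, and the softmax-over-similarities weighting are illustrative assumptions. The first function mixes pixel features from both domains into per-class prototypes (the "domain-mixed" updating idea); the second assigns each target pixel a reliability weight for self-training based on how close its feature lies to the prototype of its pseudo-label.

```python
import numpy as np

def update_prototypes(protos, feats, labels, num_classes, momentum=0.9):
    """EMA update of class prototypes from a batch of pixel features.

    protos: (C, D) current class prototypes.
    feats:  (N, D) pixel features, pooled from source and target batches,
            so the prototypes mix statistics from both domains.
    labels: (N,) class index per pixel (source ground truth, or target
            pseudo-labels).
    """
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            batch_mean = feats[mask].mean(axis=0)
            protos[c] = momentum * protos[c] + (1 - momentum) * batch_mean
    return protos

def prototype_weights(protos, feats, pseudo_labels, tau=1.0):
    """Per-pixel reliability weight for self-training.

    Pixels whose features are close (high similarity) to the prototype of
    their pseudo-label class receive a weight near 1; ambiguous pixels,
    whose features sit between prototypes, receive a lower weight and thus
    contribute less to the self-training loss.
    """
    sims = feats @ protos.T                      # (N, C) similarity scores
    probs = np.exp(sims / tau)
    probs /= probs.sum(axis=1, keepdims=True)    # softmax over classes
    return probs[np.arange(len(feats)), pseudo_labels]
```

In a full pipeline these weights would multiply the per-pixel cross-entropy loss on pseudo-labeled target images, which is one common way to realize the "dynamically assigns a weight for each pixel" behavior the abstract describes.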

