

Superpixel-guided class-level denoising for unsupervised domain adaptive fundus image segmentation without source data.

Affiliations

Department of Biomedical Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong, China.


Publication information

Comput Biol Med. 2023 Aug;162:107061. doi: 10.1016/j.compbiomed.2023.107061. Epub 2023 May 26.

Abstract

Unsupervised domain adaptation (UDA), which alleviates the domain shift between a source domain and a target domain, has attracted substantial research interest. Previous studies have proposed effective UDA methods that require both labeled source data and unlabeled target data to achieve the desired distribution alignment. However, due to privacy concerns, the vendor can often provide only the pretrained source model, not the source data, to the target client, causing classical UDA techniques to fail. To address this issue, this paper proposes a novel Superpixel-guided Class-level Denoised self-training framework (SCD) that adapts the pretrained source model to the target domain in the absence of source data. Since the source data is unavailable, the model can only be trained on the target domain with pseudo labels obtained from the pretrained source model. However, due to the domain shift, the source model's predictions on the target domain are noisy. We therefore propose three mutually reinforcing components tailored to our self-training framework: (i) an adaptive class-aware thresholding strategy for more balanced pseudo-label generation; (ii) a masked superpixel-guided clustering method that produces multiple content-adaptive and spatially adaptive feature centroids, enhancing the discriminability of the final prototypes for effective prototypical label denoising; and (iii) adaptive learning schemes for suspected noisy-labeled and correctly labeled pixels that exploit the valuable information in both. Comprehensive experiments on multi-site fundus image segmentation demonstrate the superior performance of our approach and the effectiveness of each component.
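The abstract describes the three components only at a high level. For illustration, the sketch below shows how the first two ideas, class-aware pseudo-label thresholding and prototype-based label denoising, could look in code. It is a minimal NumPy sketch under assumed interfaces: the function names, the percentile rule, the 255 ignore index, and the single mean prototype per class are illustrative simplifications, not the paper's exact formulation, which derives its thresholds adaptively and builds multiple centroids per class via masked superpixel-guided clustering.

```python
import numpy as np

def class_aware_pseudo_labels(probs, percentile=70, ignore=255):
    """Per-class confidence thresholding for pseudo-label generation.

    probs: (C, H, W) softmax output of the pretrained source model on
           one target-domain image.
    Returns an (H, W) int label map; pixels below their class threshold
    are set to `ignore`.
    """
    conf = probs.max(axis=0)        # per-pixel confidence
    pred = probs.argmax(axis=0)     # per-pixel predicted class
    pseudo = np.full(pred.shape, ignore, dtype=np.int64)
    for c in range(probs.shape[0]):
        mask = pred == c
        if not mask.any():
            continue
        # Threshold drawn from this class's own confidence distribution,
        # so small or rare classes are not suppressed by one global cutoff.
        thr = np.percentile(conf[mask], percentile)
        pseudo[mask & (conf >= thr)] = c
    return pseudo

def prototype_denoise(features, pseudo, num_classes, ignore=255):
    """Flag pseudo-labeled pixels whose embedding lies closer to another
    class prototype than to the prototype of its own pseudo label.

    features: (D, H, W) pixel embeddings from the segmentation network.
    pseudo:   (H, W) output of class_aware_pseudo_labels.
    Returns a boolean (H, W) mask of suspected noisy pixels.
    """
    D, H, W = features.shape
    feats = features.reshape(D, -1).T            # (H*W, D)
    labels = pseudo.reshape(-1)
    # Classes with no confident pixels keep an infinite prototype, so
    # they are never selected as the nearest class.
    protos = np.full((num_classes, D), np.inf)
    for c in range(num_classes):
        sel = labels == c
        if sel.any():
            protos[c] = feats[sel].mean(axis=0)  # single mean prototype
    # Squared distance of every pixel embedding to every class prototype.
    dist = np.stack(
        [((feats - protos[c]) ** 2).sum(axis=1) for c in range(num_classes)],
        axis=1,
    )
    nearest = dist.argmin(axis=1)
    noisy = (labels != ignore) & (nearest != labels)
    return noisy.reshape(H, W)

if __name__ == "__main__":
    # Toy usage with random data (3 classes, 64x64 image, 16-dim features).
    rng = np.random.default_rng(0)
    probs = rng.dirichlet(np.ones(3), size=(64, 64)).transpose(2, 0, 1)
    feats = rng.standard_normal((16, 64, 64))
    pl = class_aware_pseudo_labels(probs)
    noisy = prototype_denoise(feats, pl, num_classes=3)
    print("labeled pixels:", int((pl != 255).sum()),
          "flagged noisy:", int(noisy.sum()))
```

In this sketch the percentile cutoff trades pseudo-label coverage against purity: a higher percentile keeps fewer but more reliable pixels per class, which is the motivation for setting the threshold per class rather than globally.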

