Superpixel-guided class-level denoising for unsupervised domain adaptive fundus image segmentation without source data.

Affiliations

Department of Biomedical Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong, China.

Publication Information

Comput Biol Med. 2023 Aug;162:107061. doi: 10.1016/j.compbiomed.2023.107061. Epub 2023 May 26.

DOI: 10.1016/j.compbiomed.2023.107061
PMID: 37263152
Abstract

Unsupervised domain adaptation (UDA), which is used to alleviate the domain shift between the source domain and target domain, has attracted substantial research interest. Previous studies have proposed effective UDA methods which require both labeled source data and unlabeled target data to achieve desirable distribution alignment. However, due to privacy concerns, the vendor side often can only trade the pretrained source model without providing the source data to the targeted client, leading to failed adaptation by classical UDA techniques. To address this issue, in this paper, a novel Superpixel-guided Class-level Denoised self-training framework (SCD) is proposed, aiming at effectively adapting the pretrained source model to the target domain in the absence of source data. Since the source data is unavailable, the model can only be trained on the target domain with the pseudo labels obtained from the pretrained source model. However, due to domain shift, the predictions obtained by the source model on the target domain are noisy. Considering this, we propose three mutual-reinforcing components tailored to our self-training framework: (i) an adaptive class-aware thresholding strategy for more balanced pseudo label generation, (ii) a masked superpixel-guided clustering method for generating multiple content-adaptive and spatial-adaptive feature centroids that enhance the discriminability of final prototypes for effective prototypical label denoising, and (iii) adaptive learning schemes for suspected noisy-labeled and correct-labeled pixels to effectively utilize the valuable information available. Comprehensive experiments on multi-site fundus image segmentation demonstrate the superior performance of our approach and the effectiveness of each component.


Similar Articles

1. Superpixel-guided class-level denoising for unsupervised domain adaptive fundus image segmentation without source data.
   Comput Biol Med. 2023 Aug;162:107061. doi: 10.1016/j.compbiomed.2023.107061. Epub 2023 May 26.
2. IAS-NET: Joint intraclassly adaptive GAN and segmentation network for unsupervised cross-domain in neonatal brain MRI segmentation.
   Med Phys. 2021 Nov;48(11):6962-6975. doi: 10.1002/mp.15212. Epub 2021 Sep 25.
3. Source-free domain adaptive segmentation with class-balanced complementary self-training.
   Artif Intell Med. 2023 Dec;146:102694. doi: 10.1016/j.artmed.2023.102694. Epub 2023 Oct 31.
4. Source free domain adaptation for medical image segmentation with fourier style mining.
   Med Image Anal. 2022 Jul;79:102457. doi: 10.1016/j.media.2022.102457. Epub 2022 Apr 12.
5. S-CUDA: Self-cleansing unsupervised domain adaptation for medical image segmentation.
   Med Image Anal. 2021 Dec;74:102214. doi: 10.1016/j.media.2021.102214. Epub 2021 Aug 12.
6. FPL+: Filtered Pseudo Label-Based Unsupervised Cross-Modality Adaptation for 3D Medical Image Segmentation.
   IEEE Trans Med Imaging. 2024 Sep;43(9):3098-3109. doi: 10.1109/TMI.2024.3387415. Epub 2024 Sep 3.
7. A medical unsupervised domain adaptation framework based on Fourier transform image translation and multi-model ensemble self-training strategy.
   Int J Comput Assist Radiol Surg. 2023 Oct;18(10):1885-1894. doi: 10.1007/s11548-023-02867-5. Epub 2023 Apr 3.
8. Memory consistent unsupervised off-the-shelf model adaptation for source-relaxed medical image segmentation.
   Med Image Anal. 2023 Jan;83:102641. doi: 10.1016/j.media.2022.102641. Epub 2022 Oct 1.
9. Adaptive Contrastive Learning with Label Consistency for Source Data Free Unsupervised Domain Adaptation.
   Sensors (Basel). 2022 Jun 2;22(11):4238. doi: 10.3390/s22114238.
10. LE-UDA: Label-Efficient Unsupervised Domain Adaptation for Medical Image Segmentation.
   IEEE Trans Med Imaging. 2023 Mar;42(3):633-646. doi: 10.1109/TMI.2022.3214766. Epub 2023 Mar 2.

Cited By

1. Alzheimer's disease recognition via long-range state space model using multi-modal brain images.
   Front Neurosci. 2025 May 19;19:1576931. doi: 10.3389/fnins.2025.1576931. eCollection 2025.
2. Color detection of printing based on improved superpixel segmentation algorithm.
   Sci Rep. 2024 Oct 8;14(1):23449. doi: 10.1038/s41598-024-74179-w.
3. LGIT: local-global interaction transformer for low-light image denoising.
   Sci Rep. 2024 Sep 18;14(1):21760. doi: 10.1038/s41598-024-72912-z.