
C²MAL: cascaded network-guided class-balanced multi-prototype auxiliary learning for source-free domain adaptive medical image segmentation

Authors

Zhou Wei, Yang Xuekun, Ji Jianhang, Yi Yugen

Affiliations

College of Computer Science, Shenyang Aerospace University, Shenyang, 110136, China.

Faculty of Data Science, City University of Macau, Macau, China.

Publication

Med Biol Eng Comput. 2025 May;63(5):1551-1570. doi: 10.1007/s11517-025-03287-0. Epub 2025 Jan 20.

Abstract

Source-free domain adaptation (SFDA) has become crucial in medical image analysis, enabling the adaptation of source models across diverse datasets without labeled target domain images. Self-training, a popular SFDA approach, iteratively refines self-generated pseudo-labels using unlabeled target domain data to adapt a pre-trained model from the source domain. However, it often faces model instability due to incorrect pseudo-label accumulation and foreground-background class imbalance. This paper presents a pioneering SFDA framework, named cascaded network-guided class-balanced multi-prototype auxiliary learning (C²MAL), to enhance model stability. Firstly, we introduce the cascaded translation-segmentation network (CTS-Net), which employs iterative learning between translation and segmentation networks to generate accurate pseudo-labels. The CTS-Net employs a translation network to synthesize target-like images from unreliable predictions of the initial target domain images. The synthesized results refine segmentation network training, ensuring semantic alignment and minimizing visual disparities. Subsequently, reliable pseudo-labels guide the class-balanced multi-prototype auxiliary learning network (CMAL-Net) for effective model adaptation. CMAL-Net incorporates a new multi-prototype auxiliary learning strategy with a memory network to complement source domain data. We propose a class-balanced calibration loss and a multi-prototype-guided symmetric cross-entropy loss to tackle the class imbalance issue and enhance model adaptability to the target domain. Extensive experiments on four benchmark fundus image datasets validate the superiority of C²MAL over state-of-the-art methods, especially in scenarios with significant domain shifts. Our code is available at https://github.com/yxk-art/C2MAL.
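The abstract combines two loss ideas for self-training on noisy pseudo-labels: class-balanced weighting to counter foreground-background imbalance, and a symmetric cross-entropy term that is more tolerant of incorrect pseudo-labels than standard cross-entropy. A minimal sketch of these two ingredients in generic form — the paper's exact formulation, prototype guidance, and hyperparameters are not specified here, so the inverse-frequency weighting scheme and the `alpha`/`beta` values below are illustrative assumptions, not C²MAL's actual losses:

```python
import numpy as np

def class_balance_weights(pseudo_labels, num_classes, eps=1e-7):
    """Inverse-frequency class weights computed from hard pseudo-labels.

    An illustrative stand-in for class-balanced calibration: rare
    (foreground) classes receive larger weights than dominant background.
    """
    counts = np.bincount(pseudo_labels.ravel(), minlength=num_classes).astype(float)
    weights = 1.0 / (counts + eps)
    return weights / weights.sum() * num_classes  # normalize to mean ~1

def symmetric_cross_entropy(pred, target, alpha=0.1, beta=1.0, eps=1e-7):
    """Symmetric cross-entropy: alpha * CE + beta * reverse CE.

    pred, target: (N, C) rows of class probabilities. The reverse term
    penalizes confident predictions that disagree with the (possibly
    noisy) pseudo-label, which dampens error accumulation.
    """
    pred = np.clip(pred, eps, 1.0)
    target_c = np.clip(target, eps, 1.0)
    ce = -np.sum(target * np.log(pred), axis=1)     # forward cross-entropy
    rce = -np.sum(pred * np.log(target_c), axis=1)  # reverse cross-entropy
    return float(np.mean(alpha * ce + beta * rce))
```

In practice both terms would be applied per pixel over segmentation maps; here they are shown over flat probability rows to keep the sketch self-contained.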

