
A confidence-guided unsupervised domain adaptation network with pseudo-labeling and deformable CNN-Transformer for medical image segmentation

Author information

Zhou Jiwen, Xu Yue, Liu Zinan, Pfaender Fabien, Liu Wanyu

Affiliations

School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, 200444, China.

UTSEUS, Shanghai University, Shanghai, 200444, China.

Publication information

Neural Netw. 2025 Nov;191:107844. doi: 10.1016/j.neunet.2025.107844. Epub 2025 Jul 8.

Abstract

Unsupervised domain adaptation (UDA) methods have achieved significant progress in medical image segmentation. Nevertheless, the large differences between the source and target domains remain a daunting barrier, creating an urgent need for more robust cross-domain solutions. Current UDA techniques generally apply a fixed feature alignment procedure to reduce inter-domain differences throughout training. This rigidity disregards the shifting nature of feature distributions during training, leading to suboptimal boundary delineation and detail retention on the target domain. A novel confidence-guided unsupervised domain adaptation network (CUDA-Net) is introduced to overcome persistent domain gaps, adapt to shifting feature distributions during training, and enhance boundary delineation in the target domain. The proposed network adaptively aligns features by tracking cross-domain distribution shifts throughout training, starting with adversarial alignment in the early stages (coarse) and transitioning to pseudo-label-driven alignment in the later stages (fine-grained), leading to more accurate segmentation in the target domain. A confidence-weighted mechanism then refines these pseudo labels by prioritizing high-confidence regions while allowing low-confidence areas to be explored gradually, enhancing both label reliability and overall model stability. Experiments on three representative medical image datasets, MMWHS17, BraTS2021, and VS-Seg, confirm the superiority of CUDA-Net. Notably, CUDA-Net outperforms eight leading methods in overall segmentation accuracy (Dice) and boundary extraction precision (ASD), offering an efficient and reliable solution for cross-domain medical image segmentation.
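The abstract does not give the exact formulation of the confidence-weighted pseudo-label mechanism. The following is a minimal sketch of one plausible realization: hard pseudo labels are taken from the model's softmax output on target-domain pixels, high-confidence pixels receive full training weight, and low-confidence pixels keep a reduced weight so they are explored gradually rather than discarded. The function name, the `threshold` value, and the linear down-weighting rule are illustrative assumptions, not the paper's method.

```python
import numpy as np

def confidence_weighted_pseudo_labels(probs, threshold=0.9):
    """Sketch of a confidence-weighted pseudo-labeling step.

    probs: (N, C) softmax outputs of the segmentation model on
    N target-domain pixels over C classes.
    Returns hard pseudo labels and per-pixel weights that emphasize
    high-confidence regions while down-weighting uncertain ones.
    """
    confidence = probs.max(axis=1)        # per-pixel max class probability
    pseudo_labels = probs.argmax(axis=1)  # hard pseudo labels
    # Full weight above the threshold; below it, a proportionally
    # reduced weight lets low-confidence regions contribute gradually.
    weights = np.where(confidence >= threshold, 1.0, confidence / threshold)
    return pseudo_labels, weights

# Example: one confident pixel and one uncertain pixel.
probs = np.array([[0.95, 0.05],
                  [0.60, 0.40]])
labels, weights = confidence_weighted_pseudo_labels(probs, threshold=0.9)
# labels -> [0, 0]; weights -> [1.0, 0.60 / 0.9]
```

In a self-training loop, these weights would typically scale the per-pixel cross-entropy loss on the target domain, so unreliable pseudo labels cannot destabilize training while still receiving some learning signal.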

