Yang Yang, Sun Guoying, Zhang Tong, Wang Ruixuan, Su Jingyong
School of Computer Science and Technology, Harbin Institute of Technology at Shenzhen, Shenzhen, 518055, China.
Department of Network Intelligence, Peng Cheng Laboratory, Shenzhen, 518055, China.
Med Image Anal. 2025 Apr;101:103450. doi: 10.1016/j.media.2024.103450. Epub 2025 Jan 6.
Although supervised learning has demonstrated impressive accuracy in medical image segmentation, its reliance on large labeled datasets poses a challenge due to the effort and expertise required for data acquisition. Semi-supervised learning has emerged as a potential solution. However, it tends to yield satisfactory segmentation performance in the central region of the foreground while struggling in the edge region. In this paper, we propose an innovative framework that effectively leverages unlabeled data to improve segmentation performance, especially in edge regions. Our proposed framework includes two novel designs. First, we introduce a weak-to-strong perturbation strategy with a corresponding feature-perturbed consistency loss to efficiently utilize unlabeled data and guide our framework toward learning reliable regions. Second, we propose an edge-aware contrastive loss that uses uncertainty to select positive pairs, thereby learning discriminative pixel-level features in the edge regions from unlabeled data. In this way, the model minimizes the discrepancy among multiple predictions and improves its representation ability, ultimately achieving strong performance on both primary and edge regions. We conducted a comparative analysis of segmentation results on the publicly available BraTS2020, LA, and ACDC 2017 datasets. Through extensive quantitative and visualization experiments under three standard semi-supervised settings, we demonstrate the effectiveness of our approach and set a new state of the art for semi-supervised medical image segmentation. Our code is released publicly at https://github.com/youngyzzZ/SSL-w2sPC.
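The two losses described above can be sketched in a minimal, illustrative form. This is not the authors' implementation (see the linked repository for that); it is a hedged numpy sketch under simplifying assumptions: the weakly perturbed branch's softmax output serves as pseudo-labels, per-pixel confidence (max class probability) stands in for uncertainty, a threshold `tau` defines the "reliable region", and edge pixels are approximated as those whose pseudo-label differs from a 4-neighbour. The function and parameter names (`consistency_loss`, `select_edge_positives`, `tau`) are hypothetical.

```python
import numpy as np

def consistency_loss(p_weak, p_strong, tau=0.95):
    """Confidence-masked consistency: the weakly perturbed prediction
    provides pseudo-labels, and only pixels whose max class probability
    exceeds tau (the 'reliable region') contribute to the loss.
    p_weak, p_strong: (C, H, W) softmax probability maps."""
    conf = p_weak.max(axis=0)          # per-pixel confidence
    pseudo = p_weak.argmax(axis=0)     # hard pseudo-labels
    mask = conf > tau                  # reliable-region mask
    if not mask.any():
        return 0.0
    # cross-entropy of the strong branch against the pseudo-labels
    picked = np.take_along_axis(p_strong, pseudo[None], axis=0)[0]
    return float(-np.log(picked[mask] + 1e-8).mean())

def select_edge_positives(pseudo, conf, tau=0.9):
    """Hypothetical positive-pair selection for an edge-aware contrastive
    loss: mark pixels whose pseudo-label differs from a 4-neighbour as
    edge pixels, then keep only the low-uncertainty (high-confidence)
    ones as contrastive anchors."""
    edge = np.zeros_like(pseudo, dtype=bool)
    edge[:-1] |= pseudo[:-1] != pseudo[1:]
    edge[1:] |= pseudo[1:] != pseudo[:-1]
    edge[:, :-1] |= pseudo[:, :-1] != pseudo[:, 1:]
    edge[:, 1:] |= pseudo[:, 1:] != pseudo[:, :-1]
    return edge & (conf > tau)
```

In training, `consistency_loss` would be applied to unlabeled images (weak and strong branches sharing weights), while the boolean map from `select_edge_positives` would index the feature map to gather positive pairs for a pixel-level contrastive term; the actual paper additionally perturbs features, which this sketch omits.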