Department of Software and IT Engineering, École de technologie supérieure, Montreal, H3C1K3, Canada.
School of Software, Shandong University, Jinan, 250101, China.
Med Image Anal. 2021 Oct;73:102146. doi: 10.1016/j.media.2021.102146. Epub 2021 Jun 26.
Deep co-training has recently been proposed as an effective approach for image segmentation when annotated data is scarce. In this paper, we improve existing approaches for semi-supervised segmentation with a self-paced and self-consistent co-training method. To help distill information from unlabeled images, we first design a self-paced learning strategy for co-training that lets jointly-trained neural networks focus on easier-to-segment regions first, and then gradually consider harder ones. This is achieved via an end-to-end differentiable loss in the form of a generalized Jensen-Shannon divergence (JSD). Moreover, to encourage predictions from different networks to be both consistent and confident, we enhance this generalized JSD loss with an uncertainty regularizer based on entropy. The robustness of individual models is further improved using a self-ensembling loss that enforces their predictions to be consistent across different training iterations. We demonstrate the potential of our method on three challenging image segmentation problems with different image modalities, using a small fraction of labeled data. Results show clear performance advantages over standard co-training baselines and recently proposed state-of-the-art approaches for semi-supervised segmentation.
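To make the loss design concrete, below is a minimal PyTorch-style sketch (not the authors' released code) of a generalized JSD between the softmax outputs of K co-trained networks, combined with an entropy-based uncertainty penalty and a simple self-ensembling consistency term. The weight lam and the mean-squared-error form of the consistency loss are illustrative assumptions, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def entropy(p, eps=1e-8):
        # Pixel-wise Shannon entropy, summed over the class dimension (dim=1).
        return -(p * (p + eps).log()).sum(dim=1)

    def jsd_entropy_loss(logits_list, lam=0.1):
        # Generalized Jensen-Shannon divergence across K network outputs:
        # JSD = H(mean_k p_k) - mean_k H(p_k); it is zero iff all networks agree.
        probs = [F.softmax(lg, dim=1) for lg in logits_list]   # each (B, C, H, W)
        mean_p = torch.stack(probs).mean(dim=0)
        jsd = entropy(mean_p) - torch.stack([entropy(p) for p in probs]).mean(dim=0)
        # Entropy regularizer (weight lam is a hypothetical value): also push
        # the averaged prediction to be confident, not merely consistent.
        return (jsd + lam * entropy(mean_p)).mean()

    def self_ensembling_loss(prob_current, prob_ema):
        # Consistency between a network's current prediction and a temporal
        # (e.g., exponential-moving-average) ensemble of its past predictions.
        return F.mse_loss(prob_current, prob_ema)

Since JSD = H(mean of predictions) - mean of per-network entropies, minimizing it rewards agreement between networks, while the added entropy term discourages the degenerate solution where all networks agree on uniformly uncertain outputs.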