Wang Ying, Xuan Ziwei, Ho Chiuman, Qi Guo-Jun
IEEE Trans Image Process. 2023;32:4459-4471. doi: 10.1109/TIP.2023.3299196. Epub 2023 Aug 8.
Semi-supervised dense prediction tasks, such as semantic segmentation, can be greatly improved through the use of contrastive learning. However, this approach presents two key challenges: selecting informative negative samples from a highly redundant pool and implementing effective data augmentation. To address these challenges, we present an adversarial contrastive learning method tailored to semi-supervised semantic segmentation. We directly learn adversarial negatives that retain discriminative information from past samples, leading to higher learning efficiency. Our approach also leverages an advanced data augmentation strategy, AdverseMix, which combines information from under-performing classes to generate more diverse and challenging samples. Additionally, we use auxiliary labels and classifiers to prevent overly adversarial negatives from degrading the learning process. Our experiments on the Pascal VOC and Cityscapes datasets demonstrate that our method outperforms the state of the art by a significant margin, even when only a small fraction of the data is labeled.
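As a rough illustration of the adversarial-negative idea described above, the sketch below implements a pixel-level InfoNCE loss in which the negatives are learnable embeddings updated by gradient ascent on the same contrastive objective, rather than features sampled from a memory bank. All names, shapes, and hyper-parameters (e.g. `AdversarialNegativeBank`, `num_negatives`, the temperature) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F


class AdversarialNegativeBank(torch.nn.Module):
    """A small bank of negatives kept as free parameters (an assumed design, for illustration)."""

    def __init__(self, num_negatives: int = 256, dim: int = 128):
        super().__init__()
        # Negatives are learnable parameters rather than samples drawn from a memory queue.
        self.negatives = torch.nn.Parameter(torch.randn(num_negatives, dim))

    def forward(self) -> torch.Tensor:
        # L2-normalise so the negatives live on the same unit sphere as the pixel embeddings.
        return F.normalize(self.negatives, dim=1)


def pixel_contrastive_loss(anchor, positive, negatives, temperature: float = 0.1):
    """InfoNCE over pixel embeddings: anchor/positive are (N, D), negatives are (K, D)."""
    anchor = F.normalize(anchor, dim=1)
    positive = F.normalize(positive, dim=1)
    pos_logit = (anchor * positive).sum(dim=1, keepdim=True) / temperature  # (N, 1)
    neg_logits = anchor @ negatives.t() / temperature                       # (N, K)
    logits = torch.cat([pos_logit, neg_logits], dim=1)
    targets = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, targets)


# Adversarial update of the negatives: the encoder would minimise this loss (omitted here),
# while the negative bank maximises it via gradient ascent, so it keeps tracking the
# hardest, most discriminative directions around the current anchors.
bank = AdversarialNegativeBank()
opt_neg = torch.optim.SGD(bank.parameters(), lr=1.0)

anchor = torch.randn(512, 128)    # stand-in for pixel embeddings from the segmentation head
positive = torch.randn(512, 128)  # stand-in for embeddings of the same pixels under augmentation

loss = pixel_contrastive_loss(anchor, positive, bank())
opt_neg.zero_grad()
(-loss).backward()                # negate the loss: gradient ascent for the adversarial negatives
opt_neg.step()
```

Learning the negatives directly is one way to keep a compact, hard negative set without storing a large queue of past features; the auxiliary labels and classifiers mentioned in the abstract would additionally constrain these negatives so they do not become overly adversarial.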