
Adversarial Dense Contrastive Learning for Semi-Supervised Semantic Segmentation

Authors

Wang Ying, Xuan Ziwei, Ho Chiuman, Qi Guo-Jun

Publication

IEEE Trans Image Process. 2023;32:4459-4471. doi: 10.1109/TIP.2023.3299196. Epub 2023 Aug 8.

Abstract

Semi-supervised dense prediction tasks, such as semantic segmentation, can be greatly improved through the use of contrastive learning. However, this approach presents two key challenges: selecting informative negative samples from a highly redundant pool and implementing effective data augmentation. To address these challenges, we present an adversarial contrastive learning method specifically for semi-supervised semantic segmentation. Direct learning of adversarial negatives is adopted to retain discriminative information from the past, leading to higher learning efficiency. Our approach also leverages an advanced data augmentation strategy called AdverseMix, which combines information from under-performing classes to generate more diverse and challenging samples. Additionally, we use auxiliary labels and classifiers to prevent over-adversarial negatives from affecting the learning process. Our experiments on the Pascal VOC and Cityscapes datasets demonstrate that our method outperforms the state-of-the-art by a significant margin, even when using a small fraction of labeled data.
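The abstract does not spell out the loss formulation, but the core idea of directly learned adversarial negatives can be illustrated with a pixel-level InfoNCE objective in which the negative bank is itself a set of trainable embeddings optimized to maximize the contrastive loss while the network minimizes it. The sketch below is a minimal PyTorch illustration of that general mechanism, not the authors' implementation; all names and values (embed_dim, num_negatives, tau, dense_info_nce) are assumptions chosen for clarity.

```python
# Minimal sketch, assuming a pixel-level InfoNCE loss with a directly learned
# (adversarial) negative bank. Not the paper's code; names and values are illustrative.
import torch
import torch.nn.functional as F

embed_dim, num_negatives, tau = 128, 256, 0.1

# Learnable negative bank, updated by gradient ascent on the contrastive loss.
negatives = torch.nn.Parameter(torch.randn(num_negatives, embed_dim))
opt_neg = torch.optim.SGD([negatives], lr=1.0)

def dense_info_nce(anchors, positives, negatives, tau):
    """anchors, positives: (N, D) pixel embeddings of the same class.
    negatives: (K, D) learnable negative bank."""
    anchors = F.normalize(anchors, dim=1)
    positives = F.normalize(positives, dim=1)
    negs = F.normalize(negatives, dim=1)
    pos_logits = (anchors * positives).sum(dim=1, keepdim=True) / tau  # (N, 1)
    neg_logits = anchors @ negs.t() / tau                              # (N, K)
    logits = torch.cat([pos_logits, neg_logits], dim=1)
    labels = torch.zeros(anchors.size(0), dtype=torch.long)            # positive at index 0
    return F.cross_entropy(logits, labels)

# Toy pixel embeddings standing in for the encoder's dense features.
anchors = torch.randn(64, embed_dim, requires_grad=True)
positives = torch.randn(64, embed_dim)

# Network step: minimize the contrastive loss w.r.t. the pixel embeddings.
loss = dense_info_nce(anchors, positives, negatives, tau)
loss.backward()

# Adversarial step: ascend the same loss w.r.t. the negative bank
# (implemented here by negating its gradient before the optimizer update).
with torch.no_grad():
    negatives.grad.neg_()
opt_neg.step()
opt_neg.zero_grad()
```

In a full training loop the anchors and positives would come from the segmentation encoder's dense feature map (guided by labels or pseudo-labels), and the negative bank would be updated alongside the network at each iteration, which is what allows it to retain discriminative information from past batches.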

