
Weakly Supervised RGB-D Salient Object Detection With Prediction Consistency Training and Active Scribble Boosting

Publication Information

IEEE Trans Image Process. 2022;31:2148-2161. doi: 10.1109/TIP.2022.3151999. Epub 2022 Mar 8.

Abstract

RGB-D salient object detection (SOD) has attracted increasing attention, as it produces more robust results in complex scenes than RGB-only SOD. However, state-of-the-art RGB-D SOD approaches rely heavily on large amounts of pixel-wise annotated data for training, and such dense annotation is labor-intensive and costly. To reduce the annotation burden, we investigate RGB-D SOD from a weakly supervised perspective. More specifically, we use annotator-friendly scribble annotations as the supervision signal for model training. Since scribble annotations are much sparser than ground-truth masks, critical object structure information might be neglected. To preserve such structure information, we explicitly exploit the complementary edge information from the two modalities (i.e., RGB and depth). Specifically, we leverage dual-modal edge guidance and introduce a new network architecture with a dual-edge detection module and a modality-aware feature fusion module. To exploit the information carried by unlabeled pixels, we introduce a prediction consistency training scheme that compares the predictions of two networks optimized by different strategies. Moreover, we develop an active scribble boosting strategy that provides extra supervision signals at negligible annotation cost, leading to significant SOD performance improvement. Extensive experiments on seven benchmarks validate the superiority of the proposed method. Remarkably, the proposed method trained with scribble annotations achieves performance competitive with fully supervised state-of-the-art methods.
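The prediction consistency scheme described above pairs sparse scribble supervision with an agreement term on unlabeled pixels. Below is a minimal pure-Python sketch of such a loss, assuming each of the two networks outputs a per-pixel foreground probability, a partial cross-entropy is applied only on scribble-labeled pixels, and mean-squared error measures consistency on unlabeled pixels. The function name, the choice of loss terms, and the weighting are illustrative assumptions, not the paper's exact formulation:

```python
import math

def weak_sod_loss(pred_a, pred_b, scribble, w_consist=1.0):
    """Toy scribble-supervised loss (illustrative, not the paper's exact loss).

    pred_a, pred_b: per-pixel foreground probabilities from two networks
                    optimized by different strategies (flattened lists).
    scribble:       per-pixel labels: 1 = foreground scribble,
                    0 = background scribble, -1 = unlabeled.
    """
    eps = 1e-7
    sup, n_sup = 0.0, 0          # partial cross-entropy accumulator
    consist, n_uns = 0.0, 0      # consistency accumulator
    for pa, pb, y in zip(pred_a, pred_b, scribble):
        if y >= 0:
            # Supervised term: cross-entropy on scribble-labeled pixels only.
            p = min(max(pa, eps), 1.0 - eps)
            sup += -(y * math.log(p) + (1 - y) * math.log(1.0 - p))
            n_sup += 1
        else:
            # Consistency term: penalize disagreement between the two
            # networks' predictions on unlabeled pixels.
            consist += (pa - pb) ** 2
            n_uns += 1
    sup = sup / max(n_sup, 1)
    consist = consist / max(n_uns, 1)
    return sup + w_consist * consist
```

In a real training loop this scalar would be computed over batched tensors and backpropagated through both networks; the sketch only shows how the two terms partition the pixels by label status.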

