Zhang Ni, Han Junwei, Liu Nian
IEEE Trans Image Process. 2022;31:4556-4570. doi: 10.1109/TIP.2022.3185550. Epub 2022 Jul 18.
RGB-D co-salient object detection aims to segment co-occurring salient objects given a group of relevant images and depth maps. Previous methods often adopt separate pipelines and use hand-crafted features, which makes it hard to capture the patterns of co-occurring salient objects and leads to unsatisfactory results. Using end-to-end CNN models is a straightforward alternative, but CNNs are less effective at exploiting global cues due to the intrinsic locality of convolution operations. Thus, in this paper, we instead propose an end-to-end transformer-based model, denoted CTNet, which uses class tokens to explicitly capture implicit class knowledge for RGB-D co-salient object detection. Specifically, we first design adaptive class tokens for individual images to explore intra-saliency cues, and then develop common class tokens for the whole group to explore inter-saliency cues. We also leverage the complementary cues between RGB images and depth maps to promote the learning of both types of class tokens. In addition, to facilitate model evaluation, we construct a challenging, large-scale benchmark dataset, named RGBD CoSal1k, which collects 106 groups containing 1000 pairs of RGB-D images with complex scenarios and diverse appearances. Experimental results on three benchmark datasets demonstrate the effectiveness of our proposed method.
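To make the class-token idea concrete, below is a minimal PyTorch sketch of the mechanism the abstract describes: an adaptive class token derived per image (intra-saliency) and a common class token shared across the group (inter-saliency), attended jointly with the patch tokens. All names are hypothetical, the additive RGB-D fusion and the mean-pooled common token are simplifying assumptions, and this is an illustrative sketch, not the authors' CTNet implementation.

```python
import torch
import torch.nn as nn

class ClassTokenCoSaliency(nn.Module):
    """Sketch of adaptive (per-image) and common (per-group) class
    tokens for co-saliency. Hypothetical simplification of CTNet."""

    def __init__(self, dim=256, heads=8, layers=2):
        super().__init__()
        # Adaptive token: projected from each image's own pooled features.
        self.to_adaptive = nn.Linear(dim, dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=layers)

    def forward(self, rgb_feats, depth_feats):
        # rgb_feats, depth_feats: (group, tokens, dim) patch embeddings
        # from RGB and depth backbones for one image group.
        fused = rgb_feats + depth_feats  # naive RGB-D fusion (assumption)
        # Adaptive class token per image: pooled + projected features.
        adaptive = self.to_adaptive(fused.mean(dim=1, keepdim=True))
        # Common class token: mean of adaptive tokens over the group,
        # broadcast back to every image in the group.
        common = adaptive.mean(dim=0, keepdim=True).expand(fused.size(0), -1, -1)
        # Prepend both tokens so self-attention mixes class knowledge
        # with the patch tokens.
        seq = torch.cat([common, adaptive, fused], dim=1)
        out = self.encoder(seq)
        # Refined patch features, intra-saliency token, inter-saliency token.
        return out[:, 2:], out[:, 1], out[:, 0]

# Usage: a group of 5 RGB-D image pairs, 196 patch tokens each.
model = ClassTokenCoSaliency()
rgb = torch.randn(5, 196, 256)
depth = torch.randn(5, 196, 256)
patches, intra_tok, inter_tok = model(rgb, depth)
print(patches.shape, intra_tok.shape, inter_tok.shape)
```

In this sketch the common token carries group-level class knowledge into each image's attention computation, while the adaptive token stays image-specific; a decoder head (omitted here) would map the refined patch features to co-saliency masks.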