RGBD Salient Object Detection via Deep Fusion.

Publication information

IEEE Trans Image Process. 2017 May;26(5):2274-2285. doi: 10.1109/TIP.2017.2682981. Epub 2017 Mar 15.

Abstract

Numerous efforts have been made to design various low-level saliency cues for RGBD saliency detection, such as color and depth contrast features as well as background and color compactness priors. However, how these low-level saliency cues interact with each other and how they can be effectively incorporated to generate a master saliency map remain challenging problems. In this paper, we design a new convolutional neural network (CNN) to automatically learn the interaction mechanism for RGBD salient object detection. In contrast to existing works, in which raw image pixels are fed directly to the CNN, the proposed method takes advantage of the knowledge obtained in traditional saliency detection by adopting various flexible and interpretable saliency feature vectors as inputs. This guides the CNN to learn a combination of existing features to predict saliency more effectively, which presents a less complex problem than operating on the pixels directly. We then integrate a superpixel-based Laplacian propagation framework with the trained CNN to extract a spatially consistent saliency map by exploiting the intrinsic structure of the input image. Extensive quantitative and qualitative experimental evaluations on three data sets demonstrate that the proposed method consistently outperforms the state-of-the-art methods.
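The abstract's final stage, spatially consistent refinement via superpixel-based Laplacian propagation, can be illustrated with a minimal sketch. This uses the standard closed-form graph label propagation solve, `s = (I - αŴ)⁻¹ y`, over a symmetrically normalized affinity matrix; the function name, the `alpha` smoothing parameter, and this particular formulation are assumptions based on common graph-propagation practice, not the paper's exact energy.

```python
import numpy as np

def laplacian_propagation(W, y, alpha=0.9):
    """Propagate initial per-superpixel saliency scores y over an
    affinity graph W to obtain a spatially consistent saliency map.

    W : (n, n) symmetric nonnegative superpixel affinity matrix
    y : (n,) initial saliency scores (e.g., CNN outputs per superpixel)

    Solves the closed form s = (I - alpha * D^{-1/2} W D^{-1/2})^{-1} y,
    a standard label-propagation formulation (an assumption here; the
    paper's exact objective may differ).
    """
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    # Symmetrically normalized affinity: S = D^{-1/2} W D^{-1/2}
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    n = W.shape[0]
    s = np.linalg.solve(np.eye(n) - alpha * S, y)
    # Rescale to [0, 1] so the result reads as a saliency map
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

# Toy example: 4 superpixels in a chain 0-1-2-3, with high initial
# saliency on one end; propagation smooths scores along the graph.
W = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
y = np.array([1.0, 0.8, 0.1, 0.0])
s = laplacian_propagation(W, y)
```

In practice the affinity `W` would be built from superpixel adjacency and feature similarity; the key point is that neighboring superpixels with strong affinity are pulled toward similar saliency values, enforcing the spatial consistency the abstract describes.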
