Edge Preserving and Multi-Scale Contextual Neural Network for Salient Object Detection.

Affiliations

Department of Electronic Engineering, Tsinghua National Laboratory for Information Science and Technology, Tsinghua University, Beijing, China.

Data61, CSIRO, and Australian National University, Canberra, ACT, Australia.

Publication Info

IEEE Trans Image Process. 2018;27(1):121-134. doi: 10.1109/TIP.2017.2756825.

DOI: 10.1109/TIP.2017.2756825
PMID: 28952942
Abstract

In this paper, we propose a novel edge-preserving and multi-scale contextual neural network for salient object detection. The proposed framework aims to address two limitations of existing CNN-based methods. First, region-based CNN methods lack sufficient context to accurately locate salient objects, since they deal with each region independently. Second, pixel-based CNN methods suffer from blurry boundaries due to the presence of convolutional and pooling layers. Motivated by these observations, we first propose an end-to-end edge-preserved neural network based on the Fast R-CNN framework (named ) to efficiently generate saliency maps with sharp object boundaries. Later, to further improve it, multi-scale spatial context is attached to it to consider the relationship between regions and the global scene. Furthermore, our method can be generally applied to RGB-D saliency detection by depth refinement. The proposed framework achieves both clear detection boundaries and multi-scale contextual robustness simultaneously for the first time, and thus achieves an optimized performance. Experiments on six RGB and two RGB-D benchmark datasets demonstrate that the proposed method achieves state-of-the-art performance.

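The abstract's "multi-scale spatial context" idea — relating local regions to the global scene at several scales — can be illustrated with a toy fusion scheme: average-pool a feature map at increasingly coarse scales, upsample each pooled map back, and blend. This is a minimal NumPy sketch of the general concept, not the paper's architecture; the function names (`avg_pool`, `upsample`, `multiscale_context`) and the uniform averaging fusion are assumptions for illustration.

```python
import numpy as np

def avg_pool(x, k):
    """Average-pool a 2D map over non-overlapping k x k windows."""
    h, w = x.shape
    hk, wk = h // k, w // k
    return x[:hk * k, :wk * k].reshape(hk, k, wk, k).mean(axis=(1, 3))

def upsample(x, shape):
    """Nearest-neighbor upsample a 2D map to a target shape."""
    h, w = shape
    rows = np.arange(h) * x.shape[0] // h
    cols = np.arange(w) * x.shape[1] // w
    return x[np.ix_(rows, cols)]

def multiscale_context(feat, scales=(1, 2, 4)):
    """Fuse a feature map with its pooled context at several scales.

    Coarser pools summarize progressively larger neighborhoods, so the
    fused map mixes each location's local response with increasingly
    global scene statistics (illustrative uniform-average fusion).
    """
    h, w = feat.shape
    fused = np.zeros((h, w), dtype=float)
    for k in scales:
        ctx = feat if k == 1 else avg_pool(feat, k)
        fused += upsample(ctx, (h, w))
    return fused / len(scales)
```

For example, a single bright region in an otherwise dark map keeps its peak at the finest scale while the coarser scales spread a weaker response over its surroundings, so the fused map encodes both the region and its relation to the scene.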

Similar Articles

1. Edge Preserving and Multi-Scale Contextual Neural Network for Salient Object Detection.
   IEEE Trans Image Process. 2018;27(1):121-134. doi: 10.1109/TIP.2017.2756825.
2. Embedding topological features into convolutional neural network salient object detection.
   Neural Netw. 2020 Jan;121:308-318. doi: 10.1016/j.neunet.2019.09.009. Epub 2019 Sep 25.
3. Dynamic Selective Network for RGB-D Salient Object Detection.
   IEEE Trans Image Process. 2021;30:9179-9192. doi: 10.1109/TIP.2021.3123548. Epub 2021 Nov 10.
4. Decomposition and Completion Network for Salient Object Detection.
   IEEE Trans Image Process. 2021;30:6226-6239. doi: 10.1109/TIP.2021.3093380. Epub 2021 Jul 12.
5. Multi-Scale Global Contrast CNN for Salient Object Detection.
   Sensors (Basel). 2020 May 6;20(9):2656. doi: 10.3390/s20092656.
6. DMRA: Depth-Induced Multi-Scale Recurrent Attention Network for RGB-D Saliency Detection.
   IEEE Trans Image Process. 2022;31:2321-2336. doi: 10.1109/TIP.2022.3154931. Epub 2022 Mar 11.
7. CNNs-Based RGB-D Saliency Detection via Cross-View Transfer and Multiview Fusion.
   IEEE Trans Cybern. 2018 Nov;48(11):3171-3183. doi: 10.1109/TCYB.2017.2761775. Epub 2017 Oct 31.
8. Absolute and Relative Depth-Induced Network for RGB-D Salient Object Detection.
   Sensors (Basel). 2023 Mar 30;23(7):3611. doi: 10.3390/s23073611.
9. Depth-Aware Salient Object Detection and Segmentation via Multiscale Discriminative Saliency Fusion and Bootstrap Learning.
   IEEE Trans Image Process. 2017 Sep;26(9):4204-4216. doi: 10.1109/TIP.2017.2711277.
10. Salient Object Detection with Lossless Feature Reflection and Weighted Structural Loss.
    IEEE Trans Image Process. 2019 Jan 18. doi: 10.1109/TIP.2019.2893535.

Cited By

1. Swin Transformer-Based Edge Guidance Network for RGB-D Salient Object Detection.
   Sensors (Basel). 2023 Oct 29;23(21):8802. doi: 10.3390/s23218802.
2. Global Guided Cross-Modal Cross-Scale Network for RGB-D Salient Object Detection.
   Sensors (Basel). 2023 Aug 17;23(16):7221. doi: 10.3390/s23167221.
3. Meaningful Secret Image Sharing with Saliency Detection.
   Entropy (Basel). 2022 Feb 26;24(3):340. doi: 10.3390/e24030340.
4. A Novel Method for the Complex Tube System Reconstruction and Measurement.
   Sensors (Basel). 2021 Mar 22;21(6):2207. doi: 10.3390/s21062207.
5. Object Detection Based on Faster R-CNN Algorithm with Skip Pooling and Fusion of Contextual Information.
   Sensors (Basel). 2020 Sep 25;20(19):5490. doi: 10.3390/s20195490.
6. Improved Faster R-CNN Traffic Sign Detection Based on a Second Region of Interest and Highly Possible Regions Proposal Network.
   Sensors (Basel). 2019 May 17;19(10):2288. doi: 10.3390/s19102288.