Suppr 超能文献


Saliency detection for stereoscopic images.

Publication information

IEEE Trans Image Process. 2014 Jun;23(6):2625-36. doi: 10.1109/TIP.2014.2305100.

DOI: 10.1109/TIP.2014.2305100
PMID: 24832595
Abstract

Many saliency detection models for 2D images have been proposed for various multimedia processing applications during the past decades. Currently, the emerging applications of stereoscopic display require new saliency detection models for salient region extraction. Different from saliency detection for 2D images, the depth feature has to be taken into account in saliency detection for stereoscopic images. In this paper, we propose a novel stereoscopic saliency detection framework based on the feature contrast of color, luminance, texture, and depth. Four types of features, namely color, luminance, texture, and depth, are extracted from discrete cosine transform coefficients for feature contrast calculation. A Gaussian model of the spatial distance between image patches is adopted for consideration of local and global contrast calculation. Then, a new fusion method is designed to combine the feature maps to obtain the final saliency map for stereoscopic images. In addition, we adopt the center bias factor and human visual acuity, the important characteristics of the human visual system, to enhance the final saliency map for stereoscopic images. Experimental results on eye tracking databases show the superior performance of the proposed model over other existing methods.
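The abstract's core computation can be illustrated with a minimal sketch: each image patch's saliency for one feature is its feature contrast to every other patch, weighted by a Gaussian of the spatial distance between patch centres, and the per-feature maps are then fused. This is an illustrative reconstruction only, not the paper's implementation; the function names (`patch_saliency`, `fuse`), the simple uniform-weight fusion, and the scalar per-patch features are assumptions for the sketch (the paper extracts features from DCT coefficients and uses its own fusion method).

```python
import numpy as np

def patch_saliency(feature, positions, sigma=0.1):
    """Contrast-based saliency for one feature channel.

    feature   : (n,) scalar feature value per patch
    positions : (n, 2) patch-centre coordinates
    Each patch's saliency is the sum of its feature differences to all
    other patches, weighted by a Gaussian of the spatial distance.
    """
    n = len(feature)
    sal = np.zeros(n)
    for i in range(n):
        d_feat = np.abs(feature - feature[i])            # feature contrast
        d_pos = np.linalg.norm(positions - positions[i], axis=1)
        w = np.exp(-(d_pos ** 2) / (2 * sigma ** 2))     # Gaussian spatial weight
        sal[i] = np.sum(w * d_feat)
    return sal / sal.max() if sal.max() > 0 else sal

def fuse(maps, weights=None):
    """Linear fusion of per-feature saliency maps (a stand-in for the
    paper's fusion method); defaults to a uniform average."""
    maps = np.asarray(maps)
    if weights is None:
        weights = np.ones(len(maps)) / len(maps)
    return np.tensordot(weights, maps, axes=1)
```

With four patches on a unit grid where one patch has an outlying feature value, that patch receives the highest saliency, matching the intuition that contrast drives the map.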


Similar articles

1. Saliency detection for stereoscopic images.
   IEEE Trans Image Process. 2014 Jun;23(6):2625-36. doi: 10.1109/TIP.2014.2305100.
2. Boosting color saliency in image feature detection.
   IEEE Trans Pattern Anal Mach Intell. 2006 Jan;28(1):150-6. doi: 10.1109/TPAMI.2006.3.
3. Regularized feature reconstruction for spatio-temporal saliency detection.
   IEEE Trans Image Process. 2013 Aug;22(8):3120-32. doi: 10.1109/TIP.2013.2259837.
4. Efficient hybrid tree-based stereo matching with applications to postcapture image refocusing.
   IEEE Trans Image Process. 2014 Aug;23(8):3428-42. doi: 10.1109/TIP.2014.2329389. Epub 2014 Jun 5.
5. Detection of object motion regions in aerial image pairs with a multilayer markovian model.
   IEEE Trans Image Process. 2009 Oct;18(10):2303-15. doi: 10.1109/TIP.2009.2025808. Epub 2009 Jun 19.
6. Spacetime stereo: a unifying framework for depth from triangulation.
   IEEE Trans Pattern Anal Mach Intell. 2005 Feb;27(2):296-302. doi: 10.1109/TPAMI.2005.37.
7. Stereo matching with color-weighted correlation, hierarchical belief propagation, and occlusion handling.
   IEEE Trans Pattern Anal Mach Intell. 2009 Mar;31(3):492-504. doi: 10.1109/TPAMI.2008.99.
8. Distance learning for similarity estimation.
   IEEE Trans Pattern Anal Mach Intell. 2008 Mar;30(3):451-62. doi: 10.1109/TPAMI.2007.70714.
9. Feature selection in supervised saliency prediction.
   IEEE Trans Cybern. 2015 May;45(5):900-12. doi: 10.1109/TCYB.2014.2338893. Epub 2014 Aug 5.
10. Video saliency incorporating spatiotemporal cues and uncertainty weighting.
   IEEE Trans Image Process. 2014 Sep;23(9):3910-21. doi: 10.1109/TIP.2014.2336549. Epub 2014 Jul 16.

Cited by

1. Analyzing fibrous tissue pattern in fibrous dysplasia bone images using deep R-CNN networks for segmentation.
   Soft comput. 2022;26(16):7519-7533. doi: 10.1007/s00500-021-06519-1. Epub 2021 Dec 1.
2. Deep Multimodal Fusion Autoencoder for Saliency Prediction of RGB-D Images.
   Comput Intell Neurosci. 2021 May 5;2021:6610997. doi: 10.1155/2021/6610997. eCollection 2021.
3. AFI-Net: Attention-Guided Feature Integration Network for RGBD Saliency Detection.
   Comput Intell Neurosci. 2021 Mar 30;2021:8861446. doi: 10.1155/2021/8861446. eCollection 2021.
4. Hierarchical Multimodal Adaptive Fusion (HMAF) Network for Prediction of RGB-D Saliency.
   Comput Intell Neurosci. 2020 Nov 20;2020:8841681. doi: 10.1155/2020/8841681. eCollection 2020.
5. A dataset of stereoscopic images and ground-truth disparity mimicking human fixations in peripersonal space.
   Sci Data. 2017 Mar 28;4:170034. doi: 10.1038/sdata.2017.34.