Computational model of stereoscopic 3D visual saliency.

Author Information

LUNAM Université, Université de Nantes, Institut de Recherche en Communications et Cybernétique de Nantes, Polytech Nantes, Nantes 44306, France.

Publication Information

IEEE Trans Image Process. 2013 Jun;22(6):2151-65. doi: 10.1109/TIP.2013.2246176. Epub 2013 Feb 11.

Abstract

Many computational models of visual attention that perform well in predicting salient areas of 2D images have been proposed in the literature. The emerging applications of stereoscopic 3D displays add a depth dimension that affects human viewing behavior and requires the efforts made in 2D visual modeling to be extended. In this paper, we propose a new computational model of visual attention for stereoscopic 3D still images. Apart from detecting salient areas based on 2D visual features, the proposed model takes depth into account as an additional visual dimension. The measure of depth saliency is derived from eye-movement data obtained in an eye-tracking experiment using synthetic stimuli. Two different ways of integrating depth information into the modeling of 3D visual attention are then proposed and examined. For the performance evaluation of 3D visual attention models, we created an eye-tracking database containing stereoscopic images of natural content, which is made publicly available along with this paper. The proposed model gives good performance, compared with that of state-of-the-art 2D models on 2D images. The results also suggest that better performance is obtained when depth information is taken into account through the creation of a depth saliency map rather than integrated by a weighting method.
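
The contrast between the two integration schemes mentioned in the abstract can be sketched as follows. This is a minimal illustration in Python/NumPy, not the authors' exact formulation: the function names, the equal-weight pooling in the first scheme, and the linear depth-based weighting in the second are all assumptions made for the example.

```python
import numpy as np

def _normalize(m: np.ndarray) -> np.ndarray:
    """Rescale a map to [0, 1], guarding against constant maps."""
    rng = float(m.max() - m.min())
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m, dtype=float)

def fuse_with_depth_saliency_map(s_2d: np.ndarray, s_depth: np.ndarray) -> np.ndarray:
    """Scheme 1 (sketch): pool the 2D saliency map with a separately computed
    depth saliency map. The equal-weight average is an illustrative pooling
    choice, not the paper's exact operator."""
    return 0.5 * (_normalize(s_2d) + _normalize(s_depth))

def fuse_by_depth_weighting(s_2d: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Scheme 2 (sketch): use depth only as a multiplicative weight on the
    2D saliency map. Here nearer pixels (smaller depth values) get larger
    weights; the paper's actual weighting function may differ."""
    weight = 1.0 - _normalize(depth)
    return _normalize(s_2d) * weight

if __name__ == "__main__":
    # Toy maps standing in for a real 2D saliency map, a depth map, and the
    # output of a hypothetical depth-saliency stage.
    rng = np.random.default_rng(0)
    s_2d = rng.random((120, 160))
    depth = rng.random((120, 160))
    s_depth = rng.random((120, 160))
    print(fuse_with_depth_saliency_map(s_2d, s_depth).shape)
    print(fuse_by_depth_weighting(s_2d, depth).shape)
```

In the first scheme, depth contributes its own saliency map that is pooled with the 2D map; in the second, depth never produces a map of its own and only rescales the 2D result, which is the distinction the paper's evaluation turns on.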
