

Learning a saliency map using fixated locations in natural scenes.

Authors

Zhao Qi, Koch Christof

Affiliation

Computation and Neural Systems, California Institute of Technology, Pasadena, CA, USA.

Publication

J Vis. 2011 Mar 10;11(3):9. doi: 10.1167/11.3.9.

DOI: 10.1167/11.3.9
PMID: 21393388
Abstract

Inspired by the primate visual system, computational saliency models decompose visual input into a set of feature maps across spatial scales in a number of pre-specified channels. The outputs of these feature maps are summed to yield the final saliency map. Here we use a least square technique to learn the weights associated with these maps from subjects freely fixating natural scenes drawn from four recent eye-tracking data sets. Depending on the data set, the weights can be quite different, with the face and orientation channels usually more important than color and intensity channels. Inter-subject differences are negligible. We also model a bias toward fixating at the center of images and consider both time-varying and constant factors that contribute to this bias. To compensate for the inadequacy of the standard method to judge performance (area under the ROC curve), we use two other metrics to comprehensively assess performance. Although our model retains the basic structure of the standard saliency model, it outperforms several state-of-the-art saliency algorithms. Furthermore, the simple structure makes the results applicable to numerous studies in psychophysics and physiology and leads to an extremely easy implementation for real-world applications.
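The learning step the abstract describes — fitting per-channel weights by least squares so that the weighted sum of feature maps predicts fixated locations — can be sketched as follows. The channel names, map sizes, and synthetic fixation target here are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the model's channel outputs: each feature map
# (color, intensity, orientation, face) is flattened into one column.
h, w = 32, 48
channels = ["color", "intensity", "orientation", "face"]
feature_maps = {c: rng.random((h, w)) for c in channels}
X = np.column_stack([feature_maps[c].ravel() for c in channels])

# Synthetic target: a fixation density map, with face and orientation
# channels contributing more, mimicking the paper's finding.
true_w = np.array([0.1, 0.2, 0.4, 0.9])
y = X @ true_w + 0.01 * rng.standard_normal(h * w)

# Least-squares fit of the per-channel weights.
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

# The weighted sum of feature maps yields the final saliency map.
saliency = (X @ weights).reshape(h, w)
print(dict(zip(channels, np.round(weights, 2))))
```

With real eye-tracking data, `y` would be a fixation density map accumulated from subjects' fixated pixels, optionally after removing the center bias the paper models separately.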


Similar Articles

1. Learning a saliency map using fixated locations in natural scenes. J Vis. 2011 Mar 10;11(3):9. doi: 10.1167/11.3.9.
2. Learning visual saliency by combining feature maps in a nonlinear manner using AdaBoost. J Vis. 2012 Jun 15;12(6):22. doi: 10.1167/12.6.22.
3. Saliency does not account for fixations to eyes within social scenes. Vision Res. 2009 Dec;49(24):2992-3000. doi: 10.1016/j.visres.2009.09.014. Epub 2009 Sep 24.
4. What can saliency models predict about eye movements? Spatial and sequential aspects of fixations during encoding and recognition. J Vis. 2008 Feb 20;8(2):6.1-17. doi: 10.1167/8.2.6.
5. Faces and text attract gaze independent of the task: Experimental data and computer model. J Vis. 2009 Nov 18;9(12):10.1-15. doi: 10.1167/9.12.10.
6. Predicting visual fixations on video based on low-level visual features. Vision Res. 2007 Sep;47(19):2483-98. doi: 10.1016/j.visres.2007.06.015. Epub 2007 Aug 3.
7. What stands out in a scene? A study of human explicit saliency judgment. Vision Res. 2013 Oct 18;91:62-77. doi: 10.1016/j.visres.2013.07.016. Epub 2013 Aug 15.
8. Fixation and saliency during search of natural scenes: the case of visual agnosia. Neuropsychologia. 2009 Jul;47(8-9):1994-2003. doi: 10.1016/j.neuropsychologia.2009.03.013. Epub 2009 Mar 18.
9. Computational versus psychophysical bottom-up image saliency: a comparative evaluation study. IEEE Trans Pattern Anal Mach Intell. 2011 Nov;33(11):2131-46. doi: 10.1109/TPAMI.2011.53.
10. Scrambled eyes? Disrupting scene structure impedes focal processing and increases bottom-up guidance. Atten Percept Psychophys. 2011 Oct;73(7):2008-25. doi: 10.3758/s13414-011-0158-y.

Cited By

1. When is enough enough? Empirical guidelines to determine participant sample size for scene viewing studies. Behav Res Methods. 2025 Jul 28;57(9):241. doi: 10.3758/s13428-025-02754-8.
2. An improved saliency model of visual attention dependent on image content. Front Hum Neurosci. 2023 Feb 28;16:862588. doi: 10.3389/fnhum.2022.862588. eCollection 2022.
3. Visual search as an embodied process: The effects of perspective change and external reference on search performance. J Vis. 2022 Sep 2;22(10):13. doi: 10.1167/jov.22.10.13.
4. DeepGaze III: Modeling free-viewing human scanpaths with deep learning. J Vis. 2022 Apr 6;22(5):7. doi: 10.1167/jov.22.5.7.
5. Weighting the factors affecting attention guidance during free viewing and visual search: The unexpected role of object recognition uncertainty. J Vis. 2022 Mar 2;22(4):13. doi: 10.1167/jov.22.4.13.
6. Assessment of Color Perception and Preference with Eye-Tracking Analysis in a Dental Treatment Environment. Int J Environ Res Public Health. 2021 Jul 28;18(15):7981. doi: 10.3390/ijerph18157981.
7. Attention and Information Acquisition: Comparison of Mouse-Click with Eye-Movement Attention Tracking. J Eye Mov Res. 2018 Nov 16;11(6). doi: 10.16910/jemr.11.6.4.
8. Inherent Importance of Early Visual Features in Attraction of Human Attention. Comput Intell Neurosci. 2020 Dec 22;2020:3496432. doi: 10.1155/2020/3496432. eCollection 2020.
9. Face Recognition Using the SR-CNN Model. Sensors (Basel). 2018 Dec 3;18(12):4237. doi: 10.3390/s18124237.
10. Social content and emotional valence modulate gaze fixations in dynamic scenes. Sci Rep. 2018 Feb 28;8(1):3804. doi: 10.1038/s41598-018-22127-w.