

Disentangling bottom-up versus top-down and low-level versus high-level influences on eye movements over time.

Authors

Schütt Heiko H, Rothkegel Lars O M, Trukenbrod Hans A, Engbert Ralf, Wichmann Felix A

Affiliations

Neural Information Processing Group, Universität Tübingen, Tübingen, Germany.

Experimental and Biological Psychology, University of Potsdam, Potsdam, Germany.

Publication

J Vis. 2019 Mar 1;19(3):1. doi: 10.1167/19.3.1.

DOI: 10.1167/19.3.1
PMID: 30821809
Abstract

Bottom-up and top-down as well as low-level and high-level factors influence where we fixate when viewing natural scenes. However, the importance of each of these factors and how they interact remains a matter of debate. Here, we disentangle these factors by analyzing their influence over time. For this purpose, we develop a saliency model that is based on the internal representation of a recent early spatial vision model to measure the low-level, bottom-up factor. To measure the influence of high-level, bottom-up features, we use a recent deep neural network-based saliency model. To account for top-down influences, we evaluate the models on two large data sets with different tasks: first, a memorization task and, second, a search task. Our results lend support to a separation of visual scene exploration into three phases: the first saccade, an initial guided exploration characterized by a gradual broadening of the fixation density, and a steady state that is reached after roughly 10 fixations. Saccade-target selection during the initial exploration and in the steady state is related to similar areas of interest, which are better predicted when including high-level features. In the search data set, fixation locations are determined predominantly by top-down processes. In contrast, the first fixation follows a different fixation density and contains a strong central fixation bias. Nonetheless, first fixations are guided strongly by image properties, and as early as 200 ms after image onset, fixations are better predicted by high-level information. We conclude that any low-level, bottom-up factors are mainly limited to the generation of the first saccade. All saccades are better explained when high-level features are considered, and later, this high-level, bottom-up control can be overruled by top-down influences.
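The comparisons in the abstract rest on scoring how well a saliency model predicts observed fixation locations. A minimal sketch of one common scoring scheme in this literature, average log-likelihood gain over a uniform baseline, is given below; the function name, the toy map, and the fixation coordinates are illustrative assumptions, not taken from the paper itself:

```python
import numpy as np

def log_likelihood_gain(saliency, fixations):
    """Average log2-likelihood of fixations under a saliency map,
    relative to a uniform baseline (bits per fixation)."""
    # Normalize the map into a probability density over pixels.
    p = saliency - saliency.min()
    p = p / p.sum()
    # A uniform baseline assigns 1/N probability to every pixel.
    baseline = 1.0 / p.size
    rows, cols = zip(*fixations)
    probs = p[np.array(rows), np.array(cols)]
    # Guard against zero-probability pixels before taking the log.
    probs = np.clip(probs, 1e-12, None)
    return float(np.mean(np.log2(probs / baseline)))

# Toy example: a map peaked on a central 20x20 patch,
# with all fixations landing inside that patch.
sal = np.zeros((100, 100))
sal[40:60, 40:60] = 1.0
fix = [(50, 50), (45, 55), (55, 45)]
print(log_likelihood_gain(sal, fix))  # ≈ 4.64 bits above uniform
```

A positive score means the map concentrates probability where fixations actually land; comparing such scores across early and late fixations is one way to track, over time, which model (low-level versus high-level) predicts gaze better.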


Similar Articles

1. Disentangling bottom-up versus top-down and low-level versus high-level influences on eye movements over time.
   J Vis. 2019 Mar 1;19(3):1. doi: 10.1167/19.3.1.
2. Impact of dynamic bottom-up features and top-down control on the visual exploration of moving real-world scenes in hemispatial neglect.
   Neuropsychologia. 2012 Aug;50(10):2415-25. doi: 10.1016/j.neuropsychologia.2012.06.012. Epub 2012 Jun 26.
3. Temporal evolution of the central fixation bias in scene viewing.
   J Vis. 2017 Nov 1;17(13):3. doi: 10.1167/17.13.3.
4. Mid-level feature contributions to category-specific gaze guidance.
   Atten Percept Psychophys. 2019 Jan;81(1):35-46. doi: 10.3758/s13414-018-1594-8.
5. What stands out in a scene? A study of human explicit saliency judgment.
   Vision Res. 2013 Oct 18;91:62-77. doi: 10.1016/j.visres.2013.07.016. Epub 2013 Aug 15.
6. What can saliency models predict about eye movements? Spatial and sequential aspects of fixations during encoding and recognition.
   J Vis. 2008 Feb 20;8(2):6.1-17. doi: 10.1167/8.2.6.
7. Influence of disparity on fixation and saccades in free viewing of natural scenes.
   J Vis. 2009 Jan 22;9(1):29.1-19. doi: 10.1167/9.1.29.
8. Saliency does not account for fixations to eyes within social scenes.
   Vision Res. 2009 Dec;49(24):2992-3000. doi: 10.1016/j.visres.2009.09.014. Epub 2009 Sep 24.
9. Complementary effects of gaze direction and early saliency in guiding fixations during free viewing.
   J Vis. 2014 Nov 4;14(13):3. doi: 10.1167/14.13.3.
10. Saccadic context indicates information processing within visual fixations: evidence from event-related potentials and eye-movements analysis of the distractor effect.
    Int J Psychophysiol. 2011 Apr;80(1):54-62. doi: 10.1016/j.ijpsycho.2011.01.013. Epub 2011 Feb 1.

Cited By

1. Distinct contributions of foveal and extrafoveal visual information to emotion judgments and gaze behavior for faces.
   J Vis. 2025 Jul 1;25(8):4. doi: 10.1167/jov.25.8.4.
2. Integrating Bayesian and neural networks models for eye movement prediction in hybrid search.
   Sci Rep. 2025 May 12;15(1):16482. doi: 10.1038/s41598-025-00272-3.
3. No robust evidence for an effect of head-movement propensity on central bias in head-constrained scene viewing, despite an effect on fixation duration.
   J Vis. 2025 Apr 1;25(4):10. doi: 10.1167/jov.25.4.10.
4. Eye and head movements in visual search in the extended field of view.
   Sci Rep. 2024 Apr 17;14(1):8907. doi: 10.1038/s41598-024-59657-5.
5. Influence of training and expertise on deep neural network attention and human attention during a medical image classification task.
   J Vis. 2024 Apr 1;24(4):6. doi: 10.1167/jov.24.4.6.
6. The Gaze of Schizophrenia Patients Captured by Bottom-up Saliency.
   Schizophrenia (Heidelb). 2024 Feb 20;10(1):21. doi: 10.1038/s41537-024-00438-4.
7. Retinal eccentricity modulates saliency-driven but not relevance-driven visual selection.
   Atten Percept Psychophys. 2024 Jul;86(5):1609-1620. doi: 10.3758/s13414-024-02848-z. Epub 2024 Jan 25.
8. Refixation behavior in naturalistic viewing: Methods, mechanisms, and neural correlates.
   Atten Percept Psychophys. 2025 Jan;87(1):25-49. doi: 10.3758/s13414-023-02836-9. Epub 2024 Jan 2.
9. Face detection based on a human attention guided multi-scale model.
   Biol Cybern. 2023 Dec;117(6):453-466. doi: 10.1007/s00422-023-00978-5. Epub 2023 Dec 1.
10. Spatiotemporal bias of the human gaze toward hierarchical visual features during natural scene viewing.
    Sci Rep. 2023 May 18;13(1):8104. doi: 10.1038/s41598-023-34829-x.