
Temporal and peripheral extraction of contextual cues from scenes during visual search.

Author Information

Koehler Kathryn, Eckstein Miguel P

Affiliation Information

Department of Psychological and Brain Sciences, University of California, Santa Barbara, CA, USA.

Publication Information

J Vis. 2017 Feb 1;17(2):16. doi: 10.1167/17.2.16.

Abstract

Scene context is known to facilitate object recognition and guide visual search, but little work has focused on isolating image-based cues and evaluating their contributions to eye movement guidance and search performance. Here, we explore three types of contextual cues (a co-occurring object, the configuration of other objects, and the superordinate category of background elements) and assess their joint contributions to search performance in the framework of cue combination and the temporal unfolding of their extraction. We also assess whether observers' ability to extract each contextual cue in the visual periphery is a bottleneck that determines the utilization and contribution of each cue to search guidance and decision accuracy. We find that during the first four fixations of a visual search task observers first utilize the configuration of objects for coarse eye movement guidance and later use co-occurring object information for finer guidance. In the absence of contextual cues, observers were suboptimally biased to report the target object as being absent. The presence of the co-occurring object was the only contextual cue that had a significant effect in reducing decision bias. The early influence of object-based cues on eye movements is corroborated by a clear demonstration of observers' ability to extract object cues up to 16° into the visual periphery. The joint contributions of the cues to search decision accuracy approximate that expected from the combination of statistically independent cues and optimal cue combination. Finally, the lack of utilization and contribution of the background-based contextual cue to search guidance cannot be explained by the availability of the contextual cue in the visual periphery; instead, it is related to background cues providing the least inherent information about the precise location of the target in the scene.
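The abstract's benchmark of "statistically independent cues and optimal cue combination" can be made concrete with the standard signal-detection rule: for independent, equal-variance Gaussian cues, optimal combination predicts a joint sensitivity of d'_combined = sqrt(sum of squared per-cue d' values). Below is a minimal Python sketch of that prediction; the per-cue d' values and cue names are hypothetical illustrations, not figures from the paper.

    import math

    def combined_dprime(dprimes):
        # Optimal combination of statistically independent cues
        # (equal-variance Gaussian model): d'_c = sqrt(sum d_i'^2).
        return math.sqrt(sum(d ** 2 for d in dprimes))

    # Hypothetical single-cue sensitivities (illustrative only):
    # co-occurring object, object configuration, background category.
    cues = {"object": 1.2, "configuration": 0.9, "background": 0.4}

    print(combined_dprime(cues.values()))  # ~1.55, above any single cue

Comparing observed joint-cue performance against this sqrt-sum-of-squares prediction is the usual way to test whether cues contribute independently and are combined near-optimally.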

