Gayet Surya, Battistoni Elisa, Thorat Sushrut, Peelen Marius V
Experimental Psychology, Helmholtz Institute, Utrecht University.
Center for Mind/Brain Sciences, University of Trento.
J Exp Psychol Hum Percept Perform. 2024 Feb;50(2):216-231. doi: 10.1037/xhp0001172.
According to theories of visual search, observers generate a visual representation of the search target (the "attentional template") that guides spatial attention toward target-like visual input. In real-world vision, however, objects produce vastly different visual input depending on their location: your car produces a retinal image that is 10 times smaller when it is parked 50 m away than when it is parked 5 m away. Across four experiments, we investigated whether the attentional template incorporates viewing distance when observers search for familiar object categories. On each trial, participants were precued to search for a car or person in the near or far plane of an outdoor scene. In "search trials," the scene reappeared and participants had to indicate whether the search target was present or absent. In intermixed "catch trials," two silhouettes were briefly presented on either side of fixation (matching the shape and/or predicted size of the search target), one of which was followed by a probe stimulus. We found that participants were more accurate at reporting the location (Experiments 1 and 2) and orientation (Experiment 3) of probe stimuli when they were presented at the location of size-matching silhouettes. Thus, attentional templates incorporate the predicted size of an object based on the current viewing distance. This was the case, however, only when silhouettes also matched the shape of the search target (Experiment 2). We conclude that attentional templates for finding objects in scenes are shaped by a combination of category-specific attributes (shape) and context-dependent expectations about the likely appearance (size) of these objects at the current viewing location. (PsycInfo Database Record (c) 2024 APA, all rights reserved).