Look at what I can do: Object affordances guide visual attention while speakers describe potential actions.

Affiliations

Department of Psychology, University of California, Davis, Davis, CA, 95616, USA.

Department of Psychology and Center for Mind and Brain, University of California, Davis, Davis, CA, USA.

Publication information

Atten Percept Psychophys. 2022 Jul;84(5):1583-1610. doi: 10.3758/s13414-022-02467-6. Epub 2022 Apr 28.

Abstract

As we act on the world around us, our eyes seek out objects we plan to interact with. A growing body of evidence suggests that overt visual attention selects objects in the environment that could be interacted with, even when the task precludes physical interaction. In previous work, objects that afford grasping interactions influenced attention when static scenes depicted reachable spaces, and attention was otherwise better explained by general informativeness. Because grasping is but one of many object interactions, previous work may have downplayed the influence of object affordances on attention. The current study investigated the relationship between overt visual attention and object affordances versus broadly construed semantic information in scenes as speakers describe or memorize scenes. In addition to meaning and grasp maps, which capture informativeness and grasping object affordances in scenes, respectively, we introduce interact maps, which capture affordances more broadly. In a mixed-effects analysis of five eye-tracking experiments, we found that meaning predicted fixated locations in a general description task and during scene memorization. Grasp maps marginally predicted fixated locations during action description, but only for scenes that depicted reachable spaces. Interact maps predicted fixated regions in the description experiments alone. Our findings suggest that observers allocate attention to scene regions that could be readily interacted with when talking about the scene, whereas general informativeness preferentially guides attention when the task does not encourage careful consideration of objects in the scene. The current study suggests that the influence of object affordances on visual attention in scenes is mediated by task demands.

Similar articles

1. Where the action could be: Speakers look at graspable objects and meaningful scene regions when describing potential actions.
J Exp Psychol Learn Mem Cogn. 2020 Sep;46(9):1659-1681. doi: 10.1037/xlm0000837. Epub 2020 Apr 9.
2. Semantic guidance of eye movements in real-world scenes.
Vision Res. 2011 May 25;51(10):1192-205. doi: 10.1016/j.visres.2011.03.010. Epub 2011 Mar 21.
3. Visual attention during seeing for speaking in healthy aging.
Psychol Aging. 2023 Feb;38(1):49-66. doi: 10.1037/pag0000718. Epub 2022 Nov 17.
4. Rapid Extraction of the Spatial Distribution of Physical Saliency and Semantic Informativeness from Natural Scenes in the Human Brain.
J Neurosci. 2022 Jan 5;42(1):97-108. doi: 10.1523/JNEUROSCI.0602-21.2021. Epub 2021 Nov 8.
5. Looking for Semantic Similarity: What a Vector-Space Model of Semantics Can Tell Us About Attention in Real-World Scenes.
Psychol Sci. 2021 Aug;32(8):1262-1270. doi: 10.1177/0956797621994768. Epub 2021 Jul 12.
6. Searching for meaning: Local scene semantics guide attention during natural visual search in scenes.
Q J Exp Psychol (Hove). 2023 Mar;76(3):632-648. doi: 10.1177/17470218221101334. Epub 2022 Jun 8.
7. The roles of scene gist and spatial dependency among objects in the semantic guidance of attention in real-world scenes.
Vision Res. 2014 Dec;105:10-20. doi: 10.1016/j.visres.2014.08.019. Epub 2014 Sep 6.

Cited by

1. CookAR: Affordance Augmentations in Wearable AR to Support Kitchen Tool Interactions for People with Low Vision.
Proc ACM Symp User Interface Softw Tech. 2024 Oct;2024. doi: 10.1145/3654777.3676449. Epub 2024 Oct 11.
2. The label-feedback effect is influenced by target category in visual search.
PLoS One. 2024 Aug 1;19(8):e0306736. doi: 10.1371/journal.pone.0306736. eCollection 2024.
3. Speakers prioritise affordance-based object semantics in scene descriptions.
Lang Cogn Neurosci. 2023;38(8):1045-1067. doi: 10.1080/23273798.2023.2190136. Epub 2023 Mar 30.

References

1. Looking for Semantic Similarity: What a Vector-Space Model of Semantics Can Tell Us About Attention in Real-World Scenes.
Psychol Sci. 2021 Aug;32(8):1262-1270. doi: 10.1177/0956797621994768. Epub 2021 Jul 12.
2. When more is more: redundant modifiers can facilitate visual search.
Cogn Res Princ Implic. 2021 Feb 17;6(1):10. doi: 10.1186/s41235-021-00275-4.
3. Large-scale dissociations between views of objects, scenes, and reachable-scale environments in visual cortex.
Proc Natl Acad Sci U S A. 2020 Nov 24;117(47):29354-29362. doi: 10.1073/pnas.1912333117.
4. Why do we retrace our visual steps? Semantic and episodic memory in gaze reinstatement.
Learn Mem. 2020 Jun 15;27(7):275-283. doi: 10.1101/lm.051227.119. Print 2020 Jul.
5. Where the action could be: Speakers look at graspable objects and meaningful scene regions when describing potential actions.
J Exp Psychol Learn Mem Cogn. 2020 Sep;46(9):1659-1681. doi: 10.1037/xlm0000837. Epub 2020 Apr 9.
6. The grounding of abstract concepts in the motor and visual system: An fMRI study.
Cortex. 2020 Mar;124:1-22. doi: 10.1016/j.cortex.2019.10.014. Epub 2019 Nov 13.
7. Center bias outperforms image salience but not semantics in accounting for attention during scene viewing.
Atten Percept Psychophys. 2020 Jun;82(3):985-994. doi: 10.3758/s13414-019-01849-7.
