

Anticipation in Real-World Scenes: The Role of Visual Context and Visual Memory.

Author Information

Moreno I. Coco, Frank Keller, George L. Malcolm

Affiliations

Department of Psychology, University of Lisbon.

School of Informatics, University of Edinburgh.

Publication Information

Cogn Sci. 2016 Nov;40(8):1995-2024. doi: 10.1111/cogs.12313. Epub 2015 Oct 30.

Abstract

The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye-movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically valid contexts. Previous research has, in fact, mainly used clip-art scenes and object arrays, raising the possibility that anticipatory eye-movements are limited to displays containing a small number of objects in a visually impoverished context. In Experiment 1, we confirm that anticipation effects occur in real-world scenes and investigate the mechanisms that underlie such anticipation. In particular, we demonstrate that real-world scenes provide contextual information that anticipation can draw on: When the target object is not present in the scene, participants infer and fixate regions that are contextually appropriate (e.g., a table upon hearing eat). Experiment 2 investigates whether such contextual inference requires the co-presence of the scene, or whether memory representations can be utilized instead. The same real-world scenes as in Experiment 1 are presented to participants, but the scene disappears before the sentence is heard. We find that anticipation occurs even when the screen is blank, including when contextual inference is required. We conclude that anticipatory language processing is able to draw upon global scene representations (such as scene type) to make contextual inferences. These findings are compatible with theories assuming contextual guidance, but posit a challenge for theories assuming object-based visual indices.

