

Understanding Events by Eye and Ear: Agent and Verb Drive Non-anticipatory Eye Movements in Dynamic Scenes.

Authors

de Almeida Roberto G, Di Nardo Julia, Antal Caitlyn, von Grünau Michael W

Affiliations

Department of Psychology, Concordia University, Montreal, QC, Canada.

Department of Linguistics, Yale University, New Haven, CT, United States.

Publication

Front Psychol. 2019 Oct 10;10:2162. doi: 10.3389/fpsyg.2019.02162. eCollection 2019.

DOI: 10.3389/fpsyg.2019.02162
PMID: 31649574
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC6795699/
Abstract

As Macnamara (1978) once asked, how can we talk about what we see? We report on a study manipulating realistic dynamic scenes and sentences aiming to understand the interaction between linguistic and visual representations in real-world situations. Specifically, we monitored participants' eye movements as they watched video clips of everyday scenes while listening to sentences describing these scenes. We manipulated two main variables. The first was the semantic class of the verb in the sentence and the second was the action/motion of the agent in the unfolding event. The sentences employed two verb classes-causatives (e.g., ) and perception/psychological (e.g., )-which impose different constraints on the nouns that serve as their grammatical complements. The scenes depicted events in which agents either moved toward a target object (always the referent of the verb-complement noun), away from it, or remained neutral performing a given activity (such as cooking). Scenes and sentences were synchronized such that the verb onset corresponded to the first video frame of the agent motion toward or away from the object. Results show effects of agent motion but weak verb-semantic restrictions: causatives draw more attention to potential referents of their grammatical complements than perception verbs only when the agent moves toward the target object. Crucially, we found no anticipatory verb-driven eye movements toward the target object, contrary to studies using non-naturalistic and static scenes. We propose a model in which linguistic and visual computations in real-world situations occur largely independent of each other during the early moments of perceptual input, but rapidly interact at a central, conceptual system using a common, propositional code. Implications for language use in real world contexts are discussed.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/52c0/6795699/3725e2023857/fpsyg-10-02162-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/52c0/6795699/a51164bfb389/fpsyg-10-02162-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/52c0/6795699/ea3e2bcc690e/fpsyg-10-02162-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/52c0/6795699/c82c4cb069f0/fpsyg-10-02162-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/52c0/6795699/715025b30429/fpsyg-10-02162-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/52c0/6795699/a25961504cd2/fpsyg-10-02162-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/52c0/6795699/8ce72b361550/fpsyg-10-02162-g007.jpg

Similar articles

1
Understanding Events by Eye and Ear: Agent and Verb Drive Non-anticipatory Eye Movements in Dynamic Scenes.
Front Psychol. 2019 Oct 10;10:2162. doi: 10.3389/fpsyg.2019.02162. eCollection 2019.
2
Verbal Semantics Drives Early Anticipatory Eye Movements during the Comprehension of Verb-Initial Sentences.
Front Psychol. 2016 Feb 9;7:95. doi: 10.3389/fpsyg.2016.00095. eCollection 2016.
3
Anticipatory Processing in a Verb-Initial Mayan Language: Eye-Tracking Evidence During Sentence Comprehension in Tseltal.
Cogn Sci. 2023 Jan;47(1):e13292. doi: 10.1111/cogs.13219.
4
Predictors of verb-mediated anticipatory eye movements in the visual world.
J Exp Psychol Learn Mem Cogn. 2017 Sep;43(9):1352-1374. doi: 10.1037/xlm0000388. Epub 2017 Mar 13.
5
Event processing in the visual world: Projected motion paths during spoken sentence comprehension.
J Exp Psychol Learn Mem Cogn. 2016 May;42(5):804-12. doi: 10.1037/xlm0000199. Epub 2015 Oct 19.
6
To Dash or to Dawdle: Verb-Associated Speed of Motion Influences Eye Movements during Spoken Sentence Comprehension.
PLoS One. 2013 Jun 21;8(6):e67187. doi: 10.1371/journal.pone.0067187. Print 2013.
7
The influence of the immediate visual context on incremental thematic role-assignment: evidence from eye-movements in depicted events.
Cognition. 2005 Feb;95(1):95-127. doi: 10.1016/j.cognition.2004.03.002.
8
Anticipation in Real-World Scenes: The Role of Visual Context and Visual Memory.
Cogn Sci. 2016 Nov;40(8):1995-2024. doi: 10.1111/cogs.12313. Epub 2015 Oct 30.
9
Prediction in a visual language: real-time sentence processing in American Sign Language across development.
Lang Cogn Neurosci. 2018;33(4):387-401. doi: 10.1080/23273798.2017.1411961. Epub 2017 Dec 8.
10
Visual context constrains language-mediated anticipatory eye movements.
Q J Exp Psychol (Hove). 2020 Mar;73(3):458-467. doi: 10.1177/1747021819881615. Epub 2019 Oct 17.

Cited by

1
Causal inference: relating language to event representations and events in the world.
Front Psychol. 2023 Sep 18;14:1172928. doi: 10.3389/fpsyg.2023.1172928. eCollection 2023.
2
Analysing data from the psycholinguistic visual-world paradigm: Comparison of different analysis methods.
Behav Res Methods. 2023 Oct;55(7):3461-3493. doi: 10.3758/s13428-022-01969-3. Epub 2022 Nov 17.

References

1
Anticipation in Real-World Scenes: The Role of Visual Context and Visual Memory.
Cogn Sci. 2016 Nov;40(8):1995-2024. doi: 10.1111/cogs.12313. Epub 2015 Oct 30.
2
Interactive activation and mutual constraint satisfaction in perception and cognition.
Cogn Sci. 2014 Aug;38(6):1139-89. doi: 10.1111/cogs.12146. Epub 2014 Aug 7.
3
Language as a source of evidence for theories of spatial representation.
Perception. 2012;41(9):1128-52. doi: 10.1068/p7271.
4
Working memory: theories, models, and controversies.
Annu Rev Psychol. 2012;63:1-29. doi: 10.1146/annurev-psych-120710-100422. Epub 2011 Sep 27.
5
The coordinated interplay of scene, utterance, and world knowledge: evidence from eye tracking.
Cogn Sci. 2006 May 6;30(3):481-529. doi: 10.1207/s15516709cog0000_65.
6
Learning to attend: a connectionist model of situated language comprehension.
Cogn Sci. 2009 May;33(3):449-96. doi: 10.1111/j.1551-6709.2009.01019.x.
7
I see what you're saying: the integration of complex speech and scenes during language comprehension.
Acta Psychol (Amst). 2011 Jun;137(2):208-16. doi: 10.1016/j.actpsy.2011.01.007. Epub 2011 Feb 8.
8
Using the visual world paradigm to study language processing: a review and critical evaluation.
Acta Psychol (Amst). 2011 Jun;137(2):151-71. doi: 10.1016/j.actpsy.2010.11.003. Epub 2011 Feb 1.
9
Vision, eye movements, and natural behavior.
Vis Neurosci. 2009 Jan-Feb;26(1):51-62. doi: 10.1017/S0952523808080899. Epub 2009 Feb 10.
10
Modularity in cognition: framing the debate.
Psychol Rev. 2006 Jul;113(3):628-47. doi: 10.1037/0033-295X.113.3.628.