

What iconic gesture fragments reveal about gesture-speech integration: when synchrony is lost, memory can help.

Affiliations

Max-Planck-Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.

Publication Information

J Cogn Neurosci. 2011 Jul;23(7):1648-63. doi: 10.1162/jocn.2010.21498. Epub 2010 Mar 29.

Abstract

The present series of experiments explores several issues related to gesture-speech integration and synchrony during sentence processing. To be able to more precisely manipulate gesture-speech synchrony, we used gesture fragments instead of complete gestures, thereby avoiding the usual long temporal overlap of gestures with their coexpressive speech. In a pretest, the minimal duration of an iconic gesture fragment needed to disambiguate a homonym (i.e., disambiguation point) was therefore identified. In three subsequent ERP experiments, we then investigated whether the gesture information available at the disambiguation point has immediate as well as delayed consequences on the processing of a temporarily ambiguous spoken sentence, and whether these gesture-speech integration processes are susceptible to temporal synchrony. Experiment 1, which used asynchronous stimuli as well as an explicit task, showed clear N400 effects at the homonym as well as at the target word presented further downstream, suggesting that asynchrony does not prevent integration under explicit task conditions. No such effects were found when asynchronous stimuli were presented using a more shallow task (Experiment 2). Finally, when gesture fragment and homonym were synchronous, similar results as in Experiment 1 were found, even under shallow task conditions (Experiment 3). We conclude that when iconic gesture fragments and speech are in synchrony, their interaction is more or less automatic. When they are not, more controlled, active memory processes are necessary to be able to combine the gesture fragment and speech context in such a way that the homonym is disambiguated correctly.
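As a rough illustration of what an "N400 effect" quantifies in experiments like these, the sketch below computes the mean EEG amplitude in a conventional 300-500 ms post-stimulus window and contrasts two conditions. This is a minimal, hypothetical example, not the authors' analysis pipeline: the function name, sampling rate, electrode assumptions, and time window are illustrative only.

```python
# Illustrative sketch only -- not the analysis used in the paper.
# It shows the generic logic of an N400 measurement: average the
# baseline-corrected EEG amplitude in a ~300-500 ms window after
# word onset, then contrast conditions. All parameters are assumed.
import numpy as np

def n400_mean_amplitude(epochs, sfreq=500.0, t_min=-0.2, window=(0.3, 0.5)):
    """Mean amplitude in the N400 window for each trial.

    epochs : array of shape (n_trials, n_samples), baseline-corrected
             EEG from a centro-parietal site, time-locked to word onset.
    """
    start = int((window[0] - t_min) * sfreq)
    stop = int((window[1] - t_min) * sfreq)
    return epochs[:, start:stop].mean(axis=1)

# Toy data: 40 trials per condition, -200..700 ms at 500 Hz, with a
# simulated extra negativity in the 300-500 ms range for "mismatch".
rng = np.random.default_rng(0)
n_trials, n_samples = 40, int(0.9 * 500)
match = rng.normal(0.0, 2.0, (n_trials, n_samples))
mismatch = rng.normal(0.0, 2.0, (n_trials, n_samples))
mismatch[:, 250:350] -= 3.0  # samples 250-350 = 300-500 ms post-onset

effect = (n400_mean_amplitude(mismatch).mean()
          - n400_mean_amplitude(match).mean())
print(f"N400 effect (mismatch - match): {effect:.2f} microvolts")
```

A more negative mean amplitude for the mismatching condition (here the simulated one) is the signature reported in the abstract at both the homonym and the downstream target word.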

