Henning Holle, Thomas C. Gunter, Shirley-Ann Rüschemeyer, Andreas Hennenlotter, Marco Iacoboni
Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstr. 1a, Leipzig, Germany.
Neuroimage. 2008 Feb 15;39(4):2010-24. doi: 10.1016/j.neuroimage.2007.10.055. Epub 2007 Nov 13.
In communicative situations, speech is often accompanied by gestures. For example, speakers tend to illustrate certain contents of speech by means of iconic gestures, i.e. hand movements that bear a formal relationship to the contents of speech. The meaning of an iconic gesture is determined both by its form and by the speech context in which it is performed. Thus, gesture and speech interact in comprehension. Using fMRI, the present study investigated which brain areas are involved in this interaction process. Participants watched videos in which sentences containing an ambiguous word (e.g. "She touched the mouse") were accompanied by either a meaningless grooming movement, a gesture supporting the more frequent dominant meaning (e.g. animal), or a gesture supporting the less frequent subordinate meaning (e.g. computer device). We hypothesized that brain areas involved in the interaction of gesture and speech would show greater activation to gesture-supported sentences than to sentences accompanied by a meaningless grooming movement. The main result is that, when contrasted with grooming, both types of gestures (dominant and subordinate) activated an array of brain regions consisting of the left posterior superior temporal sulcus (STS), the inferior parietal lobule bilaterally, and the ventral precentral sulcus bilaterally. Given the crucial role of the STS in audiovisual integration processes, this activation might reflect the interaction between the meaning of gesture and the ambiguous sentence. The activations in inferior frontal and inferior parietal regions may reflect a mechanism that determines the goal of co-speech hand movements through an observation-execution matching process.