Department of English, National Chengchi University, Taipei, Taiwan; Research Centre for Mind, Brain, and Learning, National Chengchi University, Taipei, Taiwan.
Department of Psychology, National Chengchi University, Taipei, Taiwan.
Neuropsychologia. 2023 Nov 5;190:108697. doi: 10.1016/j.neuropsychologia.2023.108697. Epub 2023 Oct 11.
Co-speech gestures are integral to human communication and take diverse forms, each serving a distinct communicative function. However, the existing literature has focused on individual gesture types, leaving a gap in our understanding of how these diverse forms are processed relative to one another. To address this, our study investigated the neural processing of two types of iconic gestures, representing either attributes or event knowledge of entity concepts, together with beat gestures (rhythmic manual movements carrying no semantic information) and self-adaptors (grooming movements). During functional magnetic resonance imaging, systematic randomization and attentive observation of video stimuli revealed a general neural substrate for co-speech gesture processing, located primarily in the bilateral middle temporal and inferior parietal cortices and reflecting visuospatial attention, semantic integration of cross-modal information, and multisensory processing of manual and audiovisual inputs. Specific types of gestures and grooming movements elicited distinct neural responses. Greater activity in the right supramarginal and inferior frontal regions was specific to self-adaptors and is relevant to the spatiomotor and integrative processing of speech and gestures. The semantic and sensorimotor regions were least active for beat gestures. The processing of attribute gestures was most pronounced in the left posterior middle temporal gyrus upon access to knowledge of entity concepts. This fMRI study illuminates the neural underpinnings of gesture-speech integration and highlights the differential processing pathways for various co-speech gestures.