Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstr. 1A, 04107 Leipzig, Germany.
Neuroimage. 2011 Sep 15;58(2):665-74. doi: 10.1016/j.neuroimage.2011.06.035. Epub 2011 Jun 22.
Face-to-face communication works multimodally. Not only do we employ vocal and facial expressions; body language provides valuable information as well. Here we focused on multimodal perception of emotion expressions, monitoring the temporal unfolding of the interaction of different modalities in the electroencephalogram (EEG). In the auditory condition, participants listened to emotional interjections such as "ah", while they saw mute video clips containing emotional body language in the visual condition. In the audiovisual condition participants saw video clips with matching interjections. In all three conditions, the emotions "anger" and "fear", as well as non-emotional stimuli were used. The N100 amplitude was strongly reduced in the audiovisual compared to the auditory condition, suggesting a significant impact of visual information on early auditory processing. Furthermore, anger and fear expressions were distinct in the auditory but not the audiovisual condition. Complementing these event-related potential (ERP) findings, we report strong similarities in the alpha- and beta-band in the visual and the audiovisual conditions, suggesting a strong visual processing component in the perception of audiovisual stimuli. Overall, our results show an early interaction of modalities in emotional face-to-face communication using complex and highly natural stimuli.