Pan Zhihui, Liu Xi, Luo Yangmei, Chen Xuhai
Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an, China.
State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, School of Brain and Cognitive Sciences, Beijing Normal University, Beijing, China.
Front Neurosci. 2017 Jun 21;11:349. doi: 10.3389/fnins.2017.00349. eCollection 2017.
Integration of information from face and voice plays a central role in social interactions. The present study investigated how emotional intensity modulates the integration of facial and vocal emotional cues by recording EEG while participants performed an emotion identification task on facial, vocal, and bimodal angry expressions varying in emotional intensity. Behaviorally, anger identification rates and reaction speed increased with emotional intensity across modalities. Critically, P2 amplitudes were larger for bimodal expressions than for the sum of facial and vocal expressions at low emotional intensity, but not at middle or high emotional intensity. These findings suggest that emotional intensity modulates the integration of facial-vocal angry expressions, consistent with the principle of inverse effectiveness (IE) in multisensory integration.
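The key contrast in this design is the additive-model test common in multisensory ERP research: the response to the bimodal stimulus is compared against the sum of the two unimodal responses, with superadditivity (AV > A + V) taken as evidence of integration. Below is a minimal sketch of that comparison in Python; the array names, trial counts, and amplitude values are illustrative assumptions, not the authors' actual data or analysis pipeline.

import numpy as np

# Hypothetical single-trial P2 amplitudes (microvolts), averaged over
# fronto-central electrodes, for one (low-intensity) condition.
rng = np.random.default_rng(0)
p2_face  = rng.normal(2.0, 0.5, size=60)   # face-only trials
p2_voice = rng.normal(1.5, 0.5, size=60)   # voice-only trials
p2_av    = rng.normal(4.2, 0.5, size=60)   # bimodal (face + voice) trials

def mean_erp(trials: np.ndarray) -> float:
    """Average P2 amplitude across trials for one condition."""
    return float(trials.mean())

# Additive model: integration is inferred when the bimodal response
# exceeds the sum of the two unimodal responses (AV > A + V).
superadditivity = mean_erp(p2_av) - (mean_erp(p2_face) + mean_erp(p2_voice))
print(f"AV - (A + V) = {superadditivity:.2f} uV")

A reliably positive difference at low intensity, but not at middle or high intensity, would match the inverse-effectiveness pattern reported in the abstract.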