Föcker Julia, Röder Brigitte
1Biological Psychology and Neuropsychology, University of Hamburg, Germany.
2School of Psychology, College of Social Science, University of Lincoln, United Kingdom.
Multisens Res. 2019 Jan 1;32(6):473-497. doi: 10.1163/22134808-20191332.
The aim of the present study was to test whether multisensory interactions of emotional signals are modulated by intermodal attention and emotional valence. Faces, voices, and bimodal emotionally congruent or incongruent face-voice pairs were presented in random order. The EEG was recorded while participants were instructed to detect sad emotional expressions in either faces or voices, ignoring all stimuli with another emotional expression as well as sad stimuli in the task-irrelevant modality. Participants processed congruent sad face-voice pairs more efficiently than sad stimuli paired with an incongruent emotion, and performance was higher in congruent bimodal than in unimodal trials, irrespective of which modality was task-relevant. Event-related potentials (ERPs) to congruent emotional face-voice pairs started to differ from ERPs to incongruent emotional face-voice pairs at 180 ms after stimulus onset: irrespective of which modality was task-relevant, ERPs revealed a more pronounced positivity (180 ms post-stimulus) to emotionally congruent than to emotionally incongruent trials if the angry emotion was presented in the attended modality. A larger negativity to incongruent than to congruent trials was observed in the time range of 400-550 ms (N400) for all emotions (happy, neutral, angry), irrespective of whether faces or voices were task-relevant. These results suggest an automatic interaction of emotion-related information.