Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany.
Front Psychol. 2013 Oct 18;4:741. doi: 10.3389/fpsyg.2013.00741. eCollection 2013.
In our natural environment, emotional information is conveyed by converging visual and auditory information; multimodal integration is therefore of utmost importance. In the laboratory, however, emotion researchers have mostly examined unimodal stimuli. The few existing studies on multimodal emotion processing have focused on human communication, such as the integration of facial and vocal expressions. Extending the concept of multimodality, the current study examines how the neural processing of emotional pictures is influenced by simultaneously presented sounds. Twenty pleasant, unpleasant, and neutral pictures of complex scenes were presented to 22 healthy participants. On the critical trials, these pictures were paired with pleasant, unpleasant, and neutral sounds. Sound presentation started 500 ms before picture onset, and each stimulus presentation lasted 2 s. EEG was recorded from 64 channels, and ERP analyses were time-locked to picture onset. In addition, valence and arousal ratings were obtained. Previous findings on the neural processing of emotional pictures were replicated. Specifically, unpleasant compared to neutral pictures were associated with an increased parietal P200 and a more pronounced centroparietal late positive potential (LPP), independent of the valence of the accompanying sound. For audiovisual stimulation, increased parietal P100 and P200 amplitudes were found in response to all pictures accompanied by unpleasant or pleasant sounds compared to pictures with neutral sounds. Most importantly, incongruent audiovisual pairs of unpleasant pictures and pleasant sounds enhanced the parietal P100 and P200 compared to congruent pairings. Taken together, the present findings indicate that emotional sounds modulate early stages of visual processing and thereby provide an avenue by which multimodal experience may enhance perception.