Garrido-Vásquez Patricia, Pell Marc D, Paulmann Silke, Kotz Sonja A
Department of Experimental Psychology and Cognitive Science, Justus Liebig University Giessen, Giessen, Germany.
Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
Front Hum Neurosci. 2018 Jun 12;12:244. doi: 10.3389/fnhum.2018.00244. eCollection 2018.
Evidence suggests that emotion is represented supramodally in the human brain. Emotional facial expressions, which often precede vocally expressed emotion in real life, can modulate event-related potentials (N100 and P200) during emotional prosody processing. To investigate these cross-modal emotional interactions, two lines of research have been pursued: cross-modal integration and cross-modal priming. In cross-modal integration studies, the visual and auditory channels are temporally aligned, whereas in priming studies they are presented consecutively. Here we used cross-modal emotional priming to study the interaction of dynamic visual and auditory emotional information. Specifically, we presented dynamic facial expressions (angry, happy, neutral) as primes and emotionally intoned pseudo-speech sentences (angry, happy) as targets. We were interested in how prime-target congruency would affect early auditory event-related potentials, i.e., the N100 and P200, in order to shed more light on how dynamic facial information is used in cross-modal emotional prediction. Results showed enhanced N100 amplitudes for incongruently primed compared to congruently and neutrally primed emotional prosody, while the latter two conditions did not differ significantly. However, N100 peak latency was significantly delayed in the neutral condition compared to the other two conditions. Source reconstruction revealed stronger activation of the right parahippocampal gyrus in incongruent than in congruent trials in the N100 time window. No significant ERP effects were observed in the P200 range. Our results indicate that dynamic facial expressions influence vocal emotion processing at an early point in time, and that an emotional mismatch between a facial expression and the ensuing vocal emotional signal induces additional processing costs in the brain, potentially because prime-target incongruency violates the cross-modal emotional prediction mechanism.
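To make the N100 measures concrete, the sketch below shows one way peak amplitude and latency could be extracted per priming condition from epoched EEG data. It is a minimal illustration only, using simulated single-trial data and an assumed 80-150 ms search window; the sampling rate, trial counts, effect sizes, and window are illustrative assumptions, not the authors' actual analysis pipeline.

import numpy as np

# Illustrative sketch only: simulated epochs and an assumed 80-150 ms N100 window;
# none of these values comes from the study itself.
rng = np.random.default_rng(0)
sfreq = 500                                    # sampling rate in Hz (assumed)
times = np.arange(-0.2, 0.6, 1.0 / sfreq)      # epoch from -200 to 600 ms

def simulate_epochs(n_trials, n100_amp):
    """Simulate single-trial waveforms with an N100-like negative deflection."""
    n100 = n100_amp * np.exp(-((times - 0.11) ** 2) / (2 * 0.02 ** 2))
    noise = rng.normal(0.0, 2.0, size=(n_trials, times.size))
    return n100 + noise                        # shape: (n_trials, n_times), in microvolts

conditions = {
    "congruent": simulate_epochs(60, -4.0),
    "incongruent": simulate_epochs(60, -6.0),  # larger (more negative) N100, as reported
    "neutral": simulate_epochs(60, -4.0),
}

win = (times >= 0.08) & (times <= 0.15)        # assumed N100 search window

for name, epochs in conditions.items():
    erp = epochs.mean(axis=0)                  # trial-averaged ERP
    peak_idx = np.argmin(erp[win])             # most negative point = N100 peak
    amp = erp[win][peak_idx]
    lat_ms = times[win][peak_idx] * 1000.0
    print(f"{name:12s} N100 peak: {amp:5.2f} uV at {lat_ms:5.1f} ms")

In an actual analysis, such peak values would be extracted per participant and electrode and then entered into the statistical comparisons of amplitude and latency reported above.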