Kokinous Jenny, Kotz Sonja A, Tavano Alessandro, Schröger Erich
Institute of Psychology, University of Leipzig, 04109 Leipzig, Germany, Max Planck Institute for Human Cognitive and Brain Sciences, Department of Neuropsychology, 04103 Leipzig, Germany, and School of Psychological Sciences, University of Manchester, Manchester, UK
Soc Cogn Affect Neurosci. 2015 May;10(5):713-20. doi: 10.1093/scan/nsu105. Epub 2014 Aug 20.
We used human electroencephalography (EEG) to study early audiovisual integration of dynamic angry and neutral expressions. An auditory-only condition served as a baseline for the interpretation of integration effects. In the audiovisual conditions, the validity of visual information was manipulated using facial expressions that were either emotionally congruent or incongruent with the vocal expressions. First, we report an N1 suppression effect for angry compared with neutral vocalizations in the auditory-only condition. Second, we confirm early integration of congruent visual and auditory information, as indexed by suppression of the auditory N1 and P2 components in the audiovisual compared with the auditory-only condition. Third, audiovisual N1 suppression was modulated by audiovisual congruency in interaction with emotion: for neutral vocalizations, there was N1 suppression in both the congruent and the incongruent audiovisual conditions; for angry vocalizations, there was N1 suppression only in the congruent but not in the incongruent condition. Extending previous findings on dynamic audiovisual integration, the current results suggest that audiovisual N1 suppression is congruency- and emotion-specific, and indicate that dynamic emotional expressions, compared with non-emotional expressions, are preferentially processed in early audiovisual integration.
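For readers unfamiliar with how an "N1 suppression" effect of this kind is typically quantified, the following is a minimal illustrative sketch, not the authors' analysis pipeline: it assumes hypothetical epoched EEG arrays (trials x channels x samples), an assumed N1 window of roughly 80-130 ms post sound onset, and an arbitrarily chosen fronto-central channel index. It simply compares mean N1-window amplitude in each audiovisual condition against the auditory-only baseline, which is the logic of the suppression contrast described in the abstract.

```python
import numpy as np

# Hypothetical epoched EEG data: (n_trials, n_channels, n_samples),
# sampled at 500 Hz, epochs spanning -100 to about +500 ms around sound onset.
sfreq = 500.0
tmin = -0.100                              # epoch start in seconds
times = tmin + np.arange(300) / sfreq      # 300 samples per epoch

rng = np.random.default_rng(0)             # placeholder random data, not real recordings
epochs = {
    "auditory_only":  rng.normal(size=(60, 32, 300)),
    "AV_congruent":   rng.normal(size=(60, 32, 300)),
    "AV_incongruent": rng.normal(size=(60, 32, 300)),
}

def n1_mean_amplitude(data, times, channel=10, window=(0.080, 0.130)):
    """Mean amplitude in an assumed N1 window at one (hypothetical) fronto-central channel."""
    mask = (times >= window[0]) & (times <= window[1])
    # Average across trials and across the samples inside the window.
    return data[:, channel, :][:, mask].mean()

n1 = {cond: n1_mean_amplitude(d, times) for cond, d in epochs.items()}

# Audiovisual N1 suppression contrast: AV condition minus auditory-only baseline.
# Because the N1 is a negative deflection, a less negative (reduced) AV value
# relative to the auditory-only condition indicates suppression.
for cond in ("AV_congruent", "AV_incongruent"):
    diff = n1[cond] - n1["auditory_only"]
    print(f"{cond}: N1 difference vs auditory-only = {diff:.3f} (arbitrary units)")
```

The channel index, time window, and synthetic data above are assumptions for illustration only; in practice the window and electrode sites would be chosen from the grand-average auditory response, and the condition differences tested statistically across participants.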