CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal.
Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal.
Cortex. 2022 Jun;151:116-132. doi: 10.1016/j.cortex.2022.02.016. Epub 2022 Mar 19.
Previous research has documented perceptual and brain differences between spontaneous and volitional emotional vocalizations. However, the time course of emotional authenticity processing remains unclear. We used event-related potentials (ERPs) to address this question, and we focused on the processing of laughter and crying. We additionally tested whether the neural encoding of authenticity is influenced by attention, by manipulating task focus (authenticity versus emotional category) and visual condition (with versus without visual deprivation). ERPs were recorded from 43 participants while they listened to vocalizations and evaluated their authenticity (volitional versus spontaneous) or emotional meaning (sad versus amused). Twenty-two of the participants were blindfolded and tested in a dark room, and 21 were tested in standard visual conditions. As compared to volitional vocalizations, spontaneous ones were associated with reduced N1 amplitude in the case of laughter, and increased P2 in the case of crying. At later cognitive processing stages, more positive amplitudes were observed for spontaneous (versus volitional) laughs and cries (1000-1400 msec), with earlier effects for laughs (700-1000 msec). Visual condition affected brain responses to emotional authenticity at early (P2 range) and late processing stages (middle and late LPP ranges). Task focus did not influence neural responses to authenticity. Our findings suggest that authenticity information is encoded early and automatically during vocal emotional processing. They also point to a potentially faster encoding of authenticity in laughter compared to crying.