Liang Junyu, Zhang Mingming, Yang Lan, Li Yiwen, Li Yuchen, Wang Li, Li Hongying, Chen Jun, Luo Wenbo
South China Normal University.
Liaoning Normal University.
J Cogn Neurosci. 2025 May 1;37(5):970-987. doi: 10.1162/jocn_a_02284.
Vocal emotions are crucial in guiding visual attention toward emotionally significant environmental events, such as recognizing emotional faces. This study employed continuous EEG recordings to examine the impact of linguistic and nonlinguistic vocalizations on facial emotion processing. Participants completed a facial emotion discrimination task while viewing fearful, happy, and neutral faces. The behavioral and ERP results indicated that fearful nonlinguistic vocalizations accelerated the recognition of fearful faces and elicited a larger P1 amplitude, whereas happy linguistic vocalizations accelerated the recognition of happy faces and likewise elicited a larger P1 amplitude. When participants recognized fearful faces, a larger N170 component was observed over the right hemisphere when the emotional category of the priming vocalization was congruent with the face stimulus; for happy faces, this congruence effect emerged over the left hemisphere. Representational similarity analysis revealed that temporoparietal regions automatically differentiate between linguistic and nonlinguistic vocalizations early in face processing. In conclusion, these findings enhance our understanding of the interplay between vocalization type and facial emotion recognition, highlighting the importance of cross-modal processing in emotional perception.
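For readers unfamiliar with representational similarity analysis (RSA), the following is a minimal sketch of the general technique, not the authors' pipeline: it assumes hypothetical condition labels, an arbitrary electrode count, and random placeholder data, and simply correlates a neural representational dissimilarity matrix (RDM) with a model RDM coding vocalization type.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical setup: 6 conditions (3 emotions x 2 vocalization types),
# each represented by voltages over 10 temporoparietal electrodes at one
# post-stimulus time point (random placeholder data, not real EEG).
n_conditions, n_electrodes = 6, 10
patterns = rng.normal(size=(n_conditions, n_electrodes))

# Neural RDM: pairwise correlation distance between condition patterns.
neural_rdm = pdist(patterns, metric="correlation")

# Model RDM: 0 if two conditions share a vocalization type, 1 otherwise
# (Hamming distance on a single binary feature).
voc_type = np.array([[0], [0], [0], [1], [1], [1]])  # linguistic vs. nonlinguistic
model_rdm = pdist(voc_type, metric="hamming")

# Spearman correlation between the two RDM vectors quantifies how well
# vocalization type explains the geometry of the neural patterns; repeating
# this per time point yields an RSA time course.
rho, p = spearmanr(neural_rdm, model_rdm)
print(f"RSA Spearman rho = {rho:.3f}, p = {p:.3f}")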