How Linguistic and Nonlinguistic Vocalizations Shape the Perception of Emotional Faces-An Electroencephalography Study.

Author Information

Liang Junyu, Zhang Mingming, Yang Lan, Li Yiwen, Li Yuchen, Wang Li, Li Hongying, Chen Jun, Luo Wenbo

Affiliations

South China Normal University.

Liaoning Normal University.

Publication Information

J Cogn Neurosci. 2025 May 1;37(5):970-987. doi: 10.1162/jocn_a_02284.

Abstract

Vocal emotions are crucial in guiding visual attention toward emotionally significant environmental events, such as recognizing emotional faces. This study employed continuous EEG recordings to examine the impact of linguistic and nonlinguistic vocalizations on facial emotion processing. Participants completed a facial emotion discrimination task while viewing fearful, happy, and neutral faces. The behavioral and ERP results indicated that fearful nonlinguistic vocalizations accelerated the recognition of fearful faces and elicited a larger P1 amplitude, whereas happy linguistic vocalizations accelerated the recognition of happy faces and similarly induced a greater P1 amplitude. In recognition of fearful faces, a greater N170 component was observed in the right hemisphere when the emotional category of the priming vocalization was consistent with the face stimulus. In contrast, this effect occurred in the left hemisphere while recognizing happy faces. Representational similarity analysis revealed that the temporoparietal regions automatically differentiate between linguistic and nonlinguistic vocalizations early in face processing. In conclusion, these findings enhance our understanding of the interplay between vocalization types and facial emotion recognition, highlighting the importance of cross-modal processing in emotional perception.
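The representational similarity analysis mentioned in the abstract can be sketched in a few lines: build a representational dissimilarity matrix (RDM) from condition-wise EEG patterns, then rank-correlate it with a model RDM coding the linguistic/nonlinguistic distinction. Everything below is a minimal illustration, not the study's pipeline: the condition count, channel count, time window, and model matrix are assumptions, and the rank correlation breaks ties arbitrarily.

```python
import numpy as np

def rdm(patterns):
    """RDM as 1 - Pearson r between each pair of condition patterns
    (rows = conditions, columns = features, e.g. channels)."""
    return 1.0 - np.corrcoef(patterns)

def upper(m):
    """Vectorize the upper triangle of a square matrix (no diagonal)."""
    i, j = np.triu_indices(m.shape[0], k=1)
    return m[i, j]

def spearman(a, b):
    """Simplified Spearman correlation via double-argsort ranks
    (NumPy only; ties are broken arbitrarily)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))

# Hypothetical data: 4 vocal-prime conditions x 64 channels, each row a
# trial-averaged scalp pattern in an early post-face-onset window.
rng = np.random.default_rng(0)
patterns = rng.standard_normal((4, 64))

# Model RDM for a linguistic vs. nonlinguistic distinction
# (conditions 0-1 linguistic, 2-3 nonlinguistic): 0 = same class,
# 1 = different class.
model = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [1, 1, 0, 0],
                  [1, 1, 0, 0]], dtype=float)

# Second-order similarity: does the neural RDM track the model RDM?
rho = spearman(upper(rdm(patterns)), upper(model))
```

In a real analysis this correlation would be computed per time point (or per searchlight over sensors) and tested against chance with a permutation scheme; the sketch above only shows the core RDM-to-model comparison.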
