The Sound of Emotion: Pinpointing Emotional Voice Processing Via Frequency Tagging EEG.

Author Information

Vos Silke, Collignon Olivier, Boets Bart

Affiliations

Center for Developmental Psychiatry, Department of Neurosciences, KU Leuven, 3000 Leuven, Belgium.

Leuven Autism Research (LAuRes), KU Leuven, 3000 Leuven, Belgium.

Publication Information

Brain Sci. 2023 Jan 18;13(2):162. doi: 10.3390/brainsci13020162.

Abstract

Successfully engaging in social communication requires efficient processing of subtle socio-communicative cues. Voices convey a wealth of social information, such as gender, identity, and the emotional state of the speaker. We tested whether our brain can systematically and automatically differentiate and track a periodic stream of emotional utterances among a series of neutral vocal utterances. We recorded frequency-tagged EEG responses of 20 neurotypical male adults while presenting streams of neutral utterances at a 4 Hz base rate, interleaved with emotional utterances every third stimulus, hence at a 1.333 Hz oddball frequency. Four emotions (happy, sad, angry, and fear) were presented as different conditions in different streams. To control the impact of low-level acoustic cues, we maximized variability among the stimuli and included a control condition with scrambled utterances. This scrambling preserves low-level acoustic characteristics but ensures that the emotional character is no longer recognizable. Results revealed significant oddball EEG responses for all conditions, indicating that every emotion category can be discriminated from the neutral stimuli, and every emotional oddball response was significantly higher than the response for the scrambled utterances. These findings demonstrate that emotion discrimination is fast, automatic, and not merely driven by low-level perceptual features. Finally, we present a new database for vocal emotion research with short emotional utterances (EVID) together with an innovative frequency-tagging EEG paradigm for implicit vocal emotion discrimination.
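The stimulation scheme described in the abstract (neutral utterances at a 4 Hz base rate, with an emotional oddball as every third stimulus, yielding a 4/3 Hz = 1.333 Hz oddball frequency) can be sketched in a few lines. This is a minimal illustration, not the authors' stimulation or analysis code; the function names, the oddball phase within the triplet, and the harmonic-selection convention are illustrative assumptions.

```python
BASE_RATE_HZ = 4.0          # presentation rate of the stimulus stream
ODDBALL_EVERY_N = 3         # every third stimulus is an emotional utterance
ODDBALL_RATE_HZ = BASE_RATE_HZ / ODDBALL_EVERY_N  # 1.333... Hz

def build_stream(n_stimuli, neutral="neutral", oddball="emotional"):
    """Return the stimulus sequence: an emotional oddball every third item,
    neutral utterances in between (O N N O N N ...)."""
    return [oddball if i % ODDBALL_EVERY_N == 0 else neutral
            for i in range(n_stimuli)]

def oddball_harmonics(max_hz=20.0):
    """Harmonics of the oddball rate up to max_hz, excluding those that
    coincide with harmonics of the base rate (a common convention in
    frequency-tagging analyses, since base-rate harmonics reflect general
    auditory responses rather than emotion discrimination)."""
    harmonics = []
    k = 1
    while k * ODDBALL_RATE_HZ <= max_hz:
        f = k * ODDBALL_RATE_HZ
        if abs(f / BASE_RATE_HZ - round(f / BASE_RATE_HZ)) > 1e-9:
            harmonics.append(round(f, 3))
        k += 1
    return harmonics
```

In this logic, a significant EEG response at the oddball frequency (and its non-base-rate harmonics) can only arise if the brain treats the emotional utterances as categorically different from the neutral ones.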

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/67fb/9954097/89ec1f9b7980/brainsci-13-00162-g001.jpg
