Brain and Creativity Institute, University of Southern California, 3620A McClintock Avenue, Los Angeles, CA, 90089-2921, United States.
Neuroimage. 2018 Jul 1;174:1-10. doi: 10.1016/j.neuroimage.2018.02.058. Epub 2018 Mar 1.
Effective social functioning relies in part on the ability to identify emotions from auditory stimuli and respond appropriately. Previous studies have uncovered brain regions engaged by the affective information conveyed by sound. However, some of the acoustical properties of sounds expressing a given emotion vary remarkably with the instrument used to produce them, for example the human voice or a violin. Do these brain regions respond in the same way to different emotions regardless of the sound source? To address this question, we had participants (N = 38, 20 females) listen to brief audio excerpts produced by the violin, clarinet, and human voice, each conveying one of three target emotions (happiness, sadness, and fear), while brain activity was measured with fMRI. We used multivoxel pattern analysis to test whether emotion-specific neural responses to the voice could predict emotion-specific neural responses to musical instruments, and vice versa. A whole-brain searchlight analysis revealed that patterns of activity within the primary and secondary auditory cortex, posterior insula, and parietal operculum were predictive of the affective content of sound both within and across instruments. Furthermore, classification accuracy within the anterior insula was correlated with behavioral measures of empathy. The findings suggest that these brain regions carry emotion-specific patterns that generalize across sounds with different acoustical properties, and that individuals with greater empathic ability have more distinct neural patterns related to perceiving emotions. These results extend previous knowledge regarding how the human brain extracts emotional meaning from auditory stimuli, enabling us to understand and connect with others effectively.
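The cross-classification logic described in the abstract (training a classifier on emotion-specific voxel patterns evoked by one sound source and testing it on patterns evoked by another) can be illustrated with a minimal sketch. This is not the authors' analysis pipeline; it uses synthetic data and a linear support vector classifier from scikit-learn purely to show the train-on-voice, test-on-instrument idea. All variable names (`voice_patterns`, `violin_patterns`, etc.) are hypothetical.

```python
# Hedged sketch of MVPA cross-classification across sound sources.
# Synthetic data stand in for searchlight voxel patterns; in a real
# analysis these would come from preprocessed fMRI trial estimates.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 60, 100
# Integer emotion labels: 0 = happiness, 1 = sadness, 2 = fear.
emotions = np.repeat([0, 1, 2], n_trials // 3)

# An emotion-specific pattern shared across sources, plus trial noise.
signal = rng.normal(size=(3, n_voxels))

def simulate_patterns(noise_sd=1.0):
    """Return synthetic trial-by-voxel activity for one sound source."""
    return signal[emotions] + noise_sd * rng.normal(size=(n_trials, n_voxels))

voice_patterns = simulate_patterns()   # stands in for voice-evoked activity
violin_patterns = simulate_patterns()  # stands in for violin-evoked activity

# Train on voice trials, test on violin trials: above-chance accuracy
# indicates an emotion code that generalizes across sound sources.
clf = LinearSVC().fit(voice_patterns, emotions)
cross_accuracy = clf.score(violin_patterns, emotions)
```

In practice this decoding would be repeated within each searchlight sphere, in both directions (voice-to-instrument and instrument-to-voice), and the resulting accuracy maps tested against the chance level of 1/3 for three emotion classes.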