Dibben Nicola, Coutinho Eduardo, Vilar José A, Estévez-Pérez Graciela
Department of Music, University of Sheffield, Sheffield, United Kingdom.
Department of Music, University of Liverpool, Liverpool, United Kingdom.
Front Behav Neurosci. 2018 Aug 27;12:184. doi: 10.3389/fnbeh.2018.00184. eCollection 2018.
Comparison of emotion perception in music and prosody has the potential to contribute to an understanding of their speculated shared evolutionary origin. Previous research suggests shared sensitivity to and processing of music and speech, but less is known about how emotion perception in the auditory domain might be influenced by individual differences. Personality, emotional intelligence, gender, musical training and age exert some influence on discrete, summative judgments of perceived emotion in music and speech stimuli. However, music and speech are temporal phenomena, and little is known about whether individual differences influence moment-by-moment perception of emotion in these domains. A behavioral study collected two main types of data: continuous ratings of perceived emotion while listening to extracts of music and speech, made via a computer interface that modeled emotion on two dimensions (arousal and valence); and demographic information, including measures of personality (Ten-Item Personality Inventory, TIPI) and emotional intelligence (Trait Emotional Intelligence Questionnaire-Short Form, TEIQue-SF). Functional analysis of variance on the time-series data revealed a small number of statistically significant differences associated with Emotional Stability, Agreeableness, musical training and age. The results indicate that individual differences exert limited influence on continuous judgments of dynamic, naturalistic expressions. We suggest that this reflects a reliance on acoustic cues to emotion in moment-by-moment judgments of perceived emotion, and provides further evidence of shared sensitivity to and processing of music and speech.
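As an illustration of what a functional (pointwise) analysis of variance over continuous emotion ratings can look like, the sketch below simulates arousal trajectories for two hypothetical listener groups (e.g., musically trained vs. untrained) and tests for group differences at each time sample. This is a minimal sketch under invented assumptions; the simulated data, group labels, sample sizes, and thresholds are not the study's materials or analysis pipeline.

```python
# Minimal, illustrative sketch of a pointwise ANOVA over continuous rating curves.
# All data here are simulated; a full functional ANOVA would additionally account
# for the smoothness of the curves and correct for multiple comparisons.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
n_per_group, n_timepoints = 20, 300   # hypothetical: 20 listeners per group, 300 time samples

# Simulated arousal trajectories in [-1, 1]: both groups follow the same underlying
# contour, with a small group offset in the middle of the excerpt.
t = np.linspace(0, 1, n_timepoints)
contour = 0.5 * np.sin(2 * np.pi * t)
group_a = contour + rng.normal(0, 0.2, (n_per_group, n_timepoints))
group_b = (contour
           + 0.15 * np.exp(-((t - 0.5) ** 2) / 0.01)   # transient group difference
           + rng.normal(0, 0.2, (n_per_group, n_timepoints)))

# One-way F-test at every time sample.
p_values = np.array([f_oneway(group_a[:, i], group_b[:, i]).pvalue
                     for i in range(n_timepoints)])

sig = p_values < 0.05
print(f"{sig.sum()} of {n_timepoints} time points differ at p < .05 (uncorrected)")
```

The design choice here is simply to treat each time sample as a separate comparison, which conveys the idea of testing group effects along the whole rating curve rather than on a single summative judgment.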