a Department of Psychology, Goldsmiths, University of London, London, UK.
b Department of Music, Durham University, Durham, UK.
Cogn Emot. 2019 Sep;33(6):1099-1118. doi: 10.1080/02699931.2018.1541312. Epub 2018 Nov 8.
With over 560 citations reported on Google Scholar by April 2018, a publication by Juslin and Gabrielsson (1996) presented evidence supporting performers' abilities to communicate, with high accuracy, their intended emotional expressions in music to listeners. Though related studies have been published on this topic, there has yet to be a direct replication of that paper. A replication is warranted given the paper's influence in the field and the implications of its results. The present experiment joins the recent replication effort by producing a five-lab replication using the original methodology. Expressive performances of seven emotions (e.g. happy, sad, angry) by professional musicians were recorded using the same three melodies from the original study. Participants (N = 319) were presented with the recordings and rated how well each emotion matched the emotional quality of each recording on a 0-10 scale. The same instruments from the original study (i.e. violin, voice, and flute) were used, with the addition of piano. In an effort to increase the accessibility of the experiment and allow for a more ecologically valid environment, the recordings were presented using an internet-based survey platform. As an extension to the original study, this experiment investigated how musicality, emotional intelligence, and emotional contagion might explain individual differences in the decoding process. Decoding accuracy was high overall (57%) when emotion ratings were aggregated across the sample of participants, mirroring the method of analysis from the original study. However, when decoding accuracy was scored for each participant individually, the average accuracy was much lower (31%). Unlike in the original study, the voice was found to be the most expressive instrument. Generalised Linear Mixed Effects Regression modelling revealed that musical training and emotional engagement with music positively influence emotion decoding accuracy.
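As a minimal sketch of the difference between the two scoring approaches, the Python snippet below computes decoding accuracy both ways on simulated rating data. The array shapes, the random ratings, and the argmax decoding rule are illustrative assumptions, not the paper's exact procedure; it is included only to show why aggregating ratings across participants before scoring tends to yield higher accuracy than scoring each participant individually.

```python
import numpy as np

# Hypothetical data: shapes, simulated ratings, and the argmax decoding rule
# are assumptions for illustration, not the procedure reported in the paper.
rng = np.random.default_rng(0)
n_participants, n_recordings, n_emotions = 319, 21, 7
ratings = rng.integers(0, 11, size=(n_participants, n_recordings, n_emotions))  # 0-10 scale
intended = rng.integers(0, n_emotions, size=n_recordings)  # intended emotion per recording

# Aggregated scoring: average ratings across participants first; a recording is
# decoded correctly if the intended emotion receives the highest mean rating.
mean_ratings = ratings.mean(axis=0)                       # (recordings, emotions)
aggregated_accuracy = (mean_ratings.argmax(axis=1) == intended).mean()

# Individual scoring: apply the same rule to each participant's own ratings,
# then average correctness over participants and recordings.
individual_accuracy = (ratings.argmax(axis=2) == intended).mean()

print(f"aggregated: {aggregated_accuracy:.2f}, individual: {individual_accuracy:.2f}")
```

The sketch makes the mechanism concrete: averaging over raters cancels idiosyncratic noise before the decoding decision is made, so sample-level accuracy can substantially exceed the average accuracy of individual listeners.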