Department of Music, York Music Psychology Group, University of York, Heslington, York, YO10 5DD, UK.
Department of Psychology, University of York, Heslington, York, YO10 5DD, UK.
Behav Res Methods. 2022 Jun;54(3):1493-1507. doi: 10.3758/s13428-021-01678-3. Epub 2021 Sep 10.
An abundance of studies on emotional experiences in response to music has been published over the past decades; however, most have been carried out in controlled laboratory settings and rely on subjective reports. Facial expressions have occasionally been assessed, but typically measured using intrusive methods such as facial electromyography (fEMG). The present study investigated the emotional experiences of fifty participants at a live concert. Our aims were to explore whether automated face analysis could detect facial expressions of emotion in a group of people in an ecologically valid listening context, to determine whether the emotions expressed by the music predicted specific facial expressions, and to examine whether facial expressions of emotion could be used to predict subjective ratings of pleasantness and activation. During the concert, participants were filmed, and their facial expressions were subsequently analyzed with automated face analysis software. Self-reports of participants' subjective experience of pleasantness and activation were collected after the concert for all pieces (two happy, two sad). Our results show that the pieces that expressed sadness elicited more facial expressions of sadness (compared to happiness), whereas the pieces that expressed happiness elicited more facial expressions of happiness (compared to sadness). No differences were found for the other facial expression categories (anger, fear, surprise, disgust, and neutral). Independent of the musical piece or the emotion expressed in the music, facial expressions of happiness predicted ratings of subjectively felt pleasantness, whilst facial expressions of sadness and disgust predicted low and high ratings of subjectively felt activation, respectively. Together, our results show that non-invasive measurement of audience facial expressions in a naturalistic concert setting is indicative of the emotions expressed by the music and of the subjective experiences of the audience members themselves.