
Music-induced emotions can be predicted from a combination of brain activity and acoustic features.

Author Information

Daly Ian, Williams Duncan, Hallowell James, Hwang Faustina, Kirke Alexis, Malik Asad, Weaver James, Miranda Eduardo, Nasuto Slawomir J

Affiliations

Brain Embodiment Lab, School of Systems Engineering, University of Reading, Reading, UK.

Interdisciplinary Centre for Music Research, University of Plymouth, Plymouth, UK.

Publication Information

Brain Cogn. 2015 Dec;101:1-11. doi: 10.1016/j.bandc.2015.08.003. Epub 2015 Nov 3.

Abstract

It is widely acknowledged that music can communicate and induce a wide range of emotions in the listener. However, music is a highly complex audio signal composed of many time- and frequency-varying components. Additionally, music-induced emotions are known to differ greatly between listeners. Therefore, it is not immediately clear what emotions a piece of music will induce in a given individual. We attempt to predict the music-induced emotional response in a listener by measuring activity in the listener's electroencephalogram (EEG). We combine these measures with acoustic descriptors of the music, an approach that allows us to consider music as a complex set of time-varying acoustic features, independently of any specific music theory. We find regression models that allow us to predict the music-induced emotions of our participants with a correlation between the actual and predicted responses of up to r = 0.234, p < 0.001. This regression fit suggests that over 20% of the variance of the participants' music-induced emotions can be predicted from their neural activity and the properties of the music. Given the large amount of noise, non-stationarity, and non-linearity in both EEG and music, this is an encouraging result. Additionally, combining measures of brain activity with acoustic features describing the music played to our participants allows us to predict music-induced emotions with significantly higher accuracy than either feature type alone (p < 0.01).
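To make the approach concrete, here is a minimal sketch, not the authors' pipeline: it fits an ordinary linear regression on concatenated EEG and acoustic feature vectors and scores held-out predictions with a Pearson correlation, the statistic reported in the abstract. The feature matrices, their dimensionalities, and the valence-rating target are all hypothetical placeholders standing in for real extracted features.

```python
# Minimal sketch of joint EEG + acoustic regression (assumed setup,
# not the published pipeline). Feature extraction is assumed to have
# happened upstream; random data stands in for real features here.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical data: one row per time window of music listening.
n_windows = 500
eeg_features = rng.standard_normal((n_windows, 32))       # e.g. band power per channel
acoustic_features = rng.standard_normal((n_windows, 10))  # e.g. tempo, spectral descriptors
valence_ratings = rng.standard_normal(n_windows)          # reported emotional response

# Combine both feature types into one design matrix, mirroring the
# paper's finding that the joint model beats either type alone.
X = np.hstack([eeg_features, acoustic_features])
X_train, X_test, y_train, y_test = train_test_split(
    X, valence_ratings, test_size=0.25, random_state=0)

model = LinearRegression().fit(X_train, y_train)

# Correlate actual and predicted responses on held-out windows.
r, p = pearsonr(model.predict(X_test), y_test)
print(f"actual vs. predicted: r = {r:.3f}, p = {p:.3g}")
```

With random placeholder data the correlation will hover near zero; the point of the sketch is only the shape of the evaluation: concatenate the two feature types, regress onto the rating, and report the actual-versus-predicted correlation.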
