
Fusion of electroencephalographic dynamics and musical contents for estimating emotional responses in music listening.

Affiliations

Swartz Center for Computational Neuroscience, Institute for Neural Computation, University of California San Diego, La Jolla, CA, USA; Center for Advanced Neurological Engineering, Institute of Engineering in Medicine, University of California San Diego, La Jolla, CA, USA.

Music and Audio Computing Lab, Research Center for IT Innovation, Academia Sinica, Taipei, Taiwan.

Publication Information

Front Neurosci. 2014 May 1;8:94. doi: 10.3389/fnins.2014.00094. eCollection 2014.

Abstract

Electroencephalography (EEG)-based emotion classification during music listening has gained increasing attention due to its promise for applications such as musical affective brain-computer interfaces (ABCI), neuromarketing, music therapy, and implicit multimedia tagging and triggering. However, music is an ecologically valid yet complex stimulus that conveys emotions to listeners through compositions of musical elements, and distinguishing emotions using EEG signals alone remains challenging. This study assessed the applicability of a multimodal approach that leverages EEG dynamics and the acoustic characteristics of musical contents to classify emotional valence and arousal. To this end, machine-learning methods were adopted to systematically elucidate the roles of the EEG and music modalities in emotion modeling. The empirical results suggested that when whole-head EEG signals were available, including musical contents did not improve classification performance: the 74-76% accuracy obtained using the EEG modality alone was statistically comparable to that of the multimodal approach. However, when EEG dynamics were available only from a small set of electrodes (the likely case in real-life applications), the music modality played a complementary role, augmenting the EEG results from around 61 to 67% in valence classification and from around 58 to 67% in arousal classification. Musical timbre appeared to replace less-discriminative EEG features, improving both valence and arousal classification, whereas musical loudness contributed specifically to arousal classification. The present study not only provides principles for constructing an EEG-based multimodal approach, but also reveals fundamental insights into the interplay of brain activity and musical contents in emotion modeling.
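The feature-level fusion described in the abstract (per-modality normalization followed by concatenation of EEG and acoustic features, then a classifier) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual pipeline: the synthetic data, feature dimensions, and the nearest-centroid classifier are all assumptions standing in for the machine-learning methods the abstract leaves unspecified.

```python
import random

def zscore(rows):
    # Column-wise z-score, so no single modality's feature scale dominates
    # the fused vector.
    cols = list(zip(*rows))
    means = [sum(c) / len(c) for c in cols]
    stds = [max((sum((x - m) ** 2 for x in c) / len(c)) ** 0.5, 1e-9)
            for c, m in zip(cols, means)]
    return [[(x - m) / s for x, m, s in zip(r, means, stds)] for r in rows]

def fuse(eeg_rows, audio_rows):
    # Feature-level fusion: normalize each modality, then concatenate per trial.
    return [e + a for e, a in zip(zscore(eeg_rows), zscore(audio_rows))]

def nearest_centroid_fit(X, y):
    # One centroid per class label (stand-in for the paper's classifier).
    cents = {}
    for label in set(y):
        members = [x for x, t in zip(X, y) if t == label]
        cents[label] = [sum(col) / len(members) for col in zip(*members)]
    return cents

def nearest_centroid_predict(cents, x):
    dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b))
    return min(cents, key=lambda lab: dist(cents[lab], x))

# Synthetic trials (illustrative only): label 1 = high valence, 0 = low.
# 4 hypothetical EEG band-power features, 2 hypothetical acoustic features.
random.seed(0)
labels = [lab for lab in (0, 0, 1, 1) * 10]
eeg = [[random.gauss(lab, 1.0) for _ in range(4)] for lab in labels]
aud = [[random.gauss(2 * lab, 1.0) for _ in range(2)] for lab in labels]

X = fuse(eeg, aud)                     # each trial becomes a 6-dim fused vector
model = nearest_centroid_fit(X, labels)
pred = nearest_centroid_predict(model, X[0])
```

Dropping the `aud` rows from `fuse` reproduces the EEG-only condition, which is the comparison the abstract reports (EEG-only vs. fused performance).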


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8916/4013455/087b335ddb7d/fnins-08-00094-g0001.jpg

Similar Articles

1
Enhanced salience of musical sounds in singers and instrumentalists.
Cogn Affect Behav Neurosci. 2022 Oct;22(5):1044-1062. doi: 10.3758/s13415-022-01007-x. Epub 2022 May 3.
2
Electroencephalographic dynamics of musical emotion perception revealed by independent spectral components.
Neuroreport. 2010 Apr 21;21(6):410-5. doi: 10.1097/WNR.0b013e32833774de.
3
Valence-arousal classification of emotion evoked by Chinese ancient-style music using 1D-CNN-BiLSTM model on EEG signals for college students.
Multimed Tools Appl. 2023;82(10):15439-15456. doi: 10.1007/s11042-022-14011-7. Epub 2022 Oct 4.
4
Experiencing affective music in eyes-closed and eyes-open states: an electroencephalography study.
Front Psychol. 2015 Aug 7;6:1160. doi: 10.3389/fpsyg.2015.01160. eCollection 2015.
5
Professional musicians listen differently to music.
Neuroscience. 2014 May 30;268:102-11. doi: 10.1016/j.neuroscience.2014.03.007. Epub 2014 Mar 15.
6
Analysis and recognition of a novel experimental paradigm for musical emotion brain-computer interface.
Brain Res. 2024 Sep 15;1839:149039. doi: 10.1016/j.brainres.2024.149039. Epub 2024 May 28.
7
EEG-based emotion recognition in music listening.
IEEE Trans Biomed Eng. 2010 Jul;57(7):1798-806. doi: 10.1109/TBME.2010.2048568. Epub 2010 May 3.

Cited By

1
Enhancing Emotion Recognition Using Region-Specific Electroencephalogram Data and Dynamic Functional Connectivity.
Front Neurosci. 2022 May 2;16:884475. doi: 10.3389/fnins.2022.884475. eCollection 2022.
2
A review of research on neuromarketing using content analysis: key approaches and new avenues.
Cogn Neurodyn. 2021 Dec;15(6):923-938. doi: 10.1007/s11571-021-09693-y. Epub 2021 Jun 21.
3
Effective Connectivity During Rest and Music Listening: An EEG Study on Parkinson's Disease.
Front Aging Neurosci. 2021 Apr 28;13:657221. doi: 10.3389/fnagi.2021.657221. eCollection 2021.
4
Mathematical Modeling of Brain Activity under Specific Auditory Stimulation.
Comput Math Methods Med. 2021 Apr 21;2021:6676681. doi: 10.1155/2021/6676681. eCollection 2021.
5
A Literature Review of EEG-Based Affective Computing in Marketing.
Front Psychol. 2021 Mar 16;12:602843. doi: 10.3389/fpsyg.2021.602843. eCollection 2021.
6
Is EEG Suitable for Marketing Research? A Systematic Review.
Front Neurosci. 2020 Dec 21;14:594566. doi: 10.3389/fnins.2020.594566. eCollection 2020.
7
Neural Decoding of Multi-Modal Imagery Behavior Focusing on Temporal Complexity.
Front Psychiatry. 2020 Jul 30;11:746. doi: 10.3389/fpsyt.2020.00746. eCollection 2020.
8
Scaling behaviour in music and cortical dynamics interplay to mediate music listening pleasure.
Sci Rep. 2019 Nov 27;9(1):17700. doi: 10.1038/s41598-019-54060-x.
9
Impact of Affective Multimedia Content on the Electroencephalogram and Facial Expressions.
Sci Rep. 2019 Nov 8;9(1):16295. doi: 10.1038/s41598-019-52891-2.

References

1
Generalizations of the subject-independent feature set for music-induced emotion recognition.
Annu Int Conf IEEE Eng Med Biol Soc. 2011;2011:6092-5. doi: 10.1109/IEMBS.2011.6091505.
2
Towards passive brain-computer interfaces: applying brain-computer interface technology to human-machine systems in general.
J Neural Eng. 2011 Apr;8(2):025005. doi: 10.1088/1741-2560/8/2/025005. Epub 2011 Mar 24.
3
Explicit and implicit emotion regulation: a dual-process framework.
Cogn Emot. 2011 Apr;25(3):400-12. doi: 10.1080/02699931.2010.544160.
4
Combining Brain-Computer Interfaces and Assistive Technologies: State-of-the-Art and Challenges.
Front Neurosci. 2010 Sep 7;4. doi: 10.3389/fnins.2010.00161. eCollection 2010.
5
A comparative study of different references for EEG default mode network: the use of the infinity reference.
Clin Neurophysiol. 2010 Dec;121(12):1981-91. doi: 10.1016/j.clinph.2010.03.056. Epub 2010 Jun 12.
6
EEG-based emotion recognition in music listening.
IEEE Trans Biomed Eng. 2010 Jul;57(7):1798-806. doi: 10.1109/TBME.2010.2048568. Epub 2010 May 3.
7
Electroencephalographic dynamics of musical emotion perception revealed by independent spectral components.
Neuroreport. 2010 Apr 21;21(6):410-5. doi: 10.1097/WNR.0b013e32833774de.
8
Toward emotion aware computing: an integrated approach using multichannel neurophysiological recordings and affective visual stimuli.
IEEE Trans Inf Technol Biomed. 2010 May;14(3):589-97. doi: 10.1109/TITB.2010.2041553. Epub 2010 Feb 17.
9
EEG dynamics during music appreciation.
Annu Int Conf IEEE Eng Med Biol Soc. 2009;2009:5316-9. doi: 10.1109/IEMBS.2009.5333524.
10
Emotion recognition from EEG using higher order crossings.
IEEE Trans Inf Technol Biomed. 2010 Mar;14(2):186-97. doi: 10.1109/TITB.2009.2034649. Epub 2009 Oct 23.
