
EEG analysis of speaking and quiet states during different emotional music stimuli.

Authors

Lin Xianwei, Wu Xinyue, Wang Zefeng, Cai Zhengting, Zhang Zihan, Xie Guangdong, Hu Lianxin, Peyrodie Laurent

Affiliations

College of Information Engineering, Huzhou University, Huzhou, China.

School of Life Sciences, Beijing University of Chinese Medicine, Beijing, China.

Publication Information

Front Neurosci. 2025 Feb 3;19:1461654. doi: 10.3389/fnins.2025.1461654. eCollection 2025.

Abstract

INTRODUCTION

Music has a profound impact on human emotions and can elicit a wide range of emotional responses, a phenomenon that has been effectively harnessed in music therapy. Given the close relationship between music and language, researchers have begun to combine artificial intelligence with advances in neuroscience to explore how music influences brain activity and cognitive processes.

METHODS

A total of 120 subjects were recruited for this study, all of them students aged 19 to 26 years. Each subject was asked to listen to six 1-minute music segments expressing different emotions and to begin speaking at the 40-second mark of each segment. For the classification model, this study compared the performance of a deep neural network with that of other machine learning algorithms.
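As a concrete illustration of this protocol, the sketch below splits a continuous one-minute trial at the 40-second mark into a quiet window (0-40 s, listening only) and a speaking window (40-60 s). The sampling rate, channel count, and array layout are assumptions for illustration; the abstract does not state them.

```python
# Hypothetical sketch of the trial segmentation implied by the protocol:
# each 1-minute music segment yields a quiet window (listening only,
# 0-40 s) and a speaking window (40-60 s). FS is an assumed sampling rate.
import numpy as np

FS = 250                # assumed sampling rate in Hz
SPEAK_ONSET_S = 40      # subjects begin speaking at the 40-second mark

def split_trial(eeg: np.ndarray, fs: int = FS):
    """Split one (channels, samples) trial into quiet and speaking windows."""
    onset = SPEAK_ONSET_S * fs
    return eeg[:, :onset], eeg[:, onset:]

# Example: a fake 32-channel, 60-second trial.
trial = np.random.randn(32, 60 * FS)
quiet, speaking = split_trial(trial)
print(quiet.shape, speaking.shape)  # (32, 10000) (32, 5000)
```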

RESULTS

Differences in EEG signals across emotions were more pronounced during speech than in the quiet state. In classifying EEG signals from the speaking and quiet states, the deep neural network achieved accuracies of 95.84% and 96.55%, respectively.
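The abstract does not disclose the network architecture, the extracted features, or the baseline algorithms. The sketch below shows one plausible way to set up such a comparison, using scikit-learn's MLPClassifier as a stand-in for the deep neural network against assumed SVM and random-forest baselines, fitted separately on speaking-state and quiet-state features; all data shapes and hyperparameters are illustrative.

```python
# Hypothetical model comparison: a small MLP stands in for the paper's
# deep neural network, against assumed SVM and random-forest baselines,
# fitted separately on speaking- and quiet-state feature matrices.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def compare_models(X, y):
    """Return held-out accuracy for each candidate classifier."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    models = {
        "DNN (MLP)": make_pipeline(
            StandardScaler(),
            MLPClassifier(hidden_layer_sizes=(128, 64),
                          max_iter=500, random_state=0)),
        "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
        "Random forest": RandomForestClassifier(n_estimators=200,
                                                random_state=0),
    }
    return {name: m.fit(X_tr, y_tr).score(X_te, y_te)
            for name, m in models.items()}

# Placeholder data: one feature row per trial, six emotion labels (0-5),
# 120 subjects x 6 segments = 720 trials per state.
rng = np.random.default_rng(0)
y = rng.integers(0, 6, size=720)
for state in ("speaking", "quiet"):
    X = rng.normal(size=(720, 64))   # fake 64-dimensional feature matrix
    print(state, compare_models(X, y))
```

On real features, this loop would produce one accuracy table per state; on the random placeholders above, every model scores near chance.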

DISCUSSION

Under stimulation by music expressing different emotions, EEG activity shows measurable differences between the speaking and resting states. In building EEG classification models, the deep neural network outperformed the other machine learning algorithms.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6af5/11830716/ab6854cc3c4c/fnins-19-1461654-g001.jpg
