Chen Qiuyu, Mao Xiaoqian, Song Yuebin, Wang Kefa
College of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao, China.
J Neurosci Methods. 2025 Mar;415:110360. doi: 10.1016/j.jneumeth.2025.110360. Epub 2025 Jan 6.
Recognition of emotion changes is of great significance to a person's physical and mental health. At present, EEG-based emotion recognition methods mainly focus on the time or frequency domain and rarely exploit spatial information. Therefore, the goal of this study is to improve emotion recognition performance by integrating frequency- and spatial-domain information across multiple frequency bands.
Firstly, EEG signals in four frequency bands are extracted, and three frequency-spatial features, namely differential entropy (DE), symmetric difference (SD) and symmetric quotient (SQ), are calculated separately. Secondly, according to the distribution of the EEG electrodes, a series of brain maps is constructed from the three frequency-spatial features for each frequency band. Thirdly, the constructed brain maps are used to train a Multi-Parallel-Input Convolutional Neural Network (MPICNN) and obtain the emotion recognition model. Finally, subject-dependent experiments are conducted on the DEAP and SEED-IV datasets.
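The sketch below illustrates the kind of frequency-spatial feature pipeline the abstract describes. It is a minimal, hypothetical Python example, not the authors' implementation: the band boundaries, the symmetric electrode pairs, the 9x9 grid positions, and the convention of computing SD and SQ as the difference and quotient of DE over mirrored electrodes are all assumptions made here for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Hypothetical band definitions in Hz; the paper's exact four bands are an assumption.
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

# Hypothetical symmetric (left, right) channel index pairs and coarse 2D grid positions;
# the real DEAP / SEED-IV montages would define these precisely.
SYM_PAIRS = [(0, 1), (2, 3)]
GRID_POS = {0: (0, 1), 1: (0, 3), 2: (1, 0), 3: (1, 4)}
H, W = 9, 9

def bandpass(x, low, high, fs, order=4):
    # Zero-phase band-pass filter applied along the time axis.
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

def differential_entropy(x):
    # For a band-limited signal assumed Gaussian, DE = 0.5 * ln(2 * pi * e * variance).
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x, axis=-1))

def frequency_spatial_maps(eeg, fs):
    """eeg: (channels, samples). Returns {band: (3, H, W)} stacks of DE, SD, SQ maps."""
    maps = {}
    for band, (lo, hi) in BANDS.items():
        de = differential_entropy(bandpass(eeg, lo, hi, fs))  # one DE value per channel
        de_map, sd_map, sq_map = (np.zeros((H, W)) for _ in range(3))
        for ch, (r, c) in GRID_POS.items():
            de_map[r, c] = de[ch]
        for left, right in SYM_PAIRS:
            # Assumption: SD and SQ are the difference and quotient of DE over symmetric
            # electrode pairs, written back to both mirrored electrode positions.
            sd, sq = de[left] - de[right], de[left] / (de[right] + 1e-8)
            for ch in (left, right):
                rr, cc = GRID_POS[ch]
                sd_map[rr, cc], sq_map[rr, cc] = sd, sq
        maps[band] = np.stack([de_map, sd_map, sq_map])
    return maps

# Example: 4 channels, 2 s of 128 Hz data -> four (3, 9, 9) brain-map stacks, one per band,
# which could feed the parallel input branches of an MPICNN-style classifier.
maps = frequency_spatial_maps(np.random.randn(4, 256), fs=128.0)
```

In this reading, each frequency band contributes one multi-channel brain map, and the parallel branches of the network consume the per-band stacks before their features are fused for four-class prediction.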
The experimental results on the DEAP dataset show that the average accuracy of four-class emotion recognition, namely high-valence high-arousal, high-valence low-arousal, low-valence high-arousal and low-valence low-arousal, reaches 98.71%. The results on the SEED-IV dataset show that the average accuracy of four-class emotion recognition, namely happy, sad, neutral and fear, reaches 92.55%.
This method achieves the best classification performance compared with state-of-the-art methods on both four-class emotion recognition datasets.
This EEG-based emotion recognition method fuses multi-frequency-spatial features across multiple frequency bands and effectively improves recognition performance compared with existing methods.