Bao Guangcheng, Yang Kai, Tong Li, Shu Jun, Zhang Rongkai, Wang Linyuan, Yan Bin, Zeng Ying
Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, China.
Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China.
Front Neurorobot. 2022 Feb 24;16:834952. doi: 10.3389/fnbot.2022.834952. eCollection 2022.
Electroencephalography (EEG)-based emotion computing has become one of the research hotspots of human-computer interaction (HCI). However, traditional convolutional neural networks struggle to learn the interactions between brain regions in emotional states, because the information transmission between neurons forms a brain network structure that standard convolutions do not capture. In this paper, we propose a novel model combining a graph convolutional network and a convolutional neural network, namely MDGCN-SRCNN, which aims to fully extract both channel-connectivity features across different receptive fields and deep abstract features, in order to distinguish different emotions. In particular, we add a style-based recalibration module to the CNN to extract deep-layer features, which better selects the features most relevant to emotion. We conducted two individual experiments, on the SEED and SEED-IV data sets, which demonstrated the effectiveness of the MDGCN-SRCNN model: the recognition accuracy on SEED and SEED-IV is 95.08% and 85.52%, respectively. Our model outperforms other state-of-the-art methods. In addition, by visualizing the distributions of features from different layers, we show that combining shallow-layer and deep-layer features effectively improves recognition performance. Finally, by analyzing the inter-channel connection weights after model learning, we identified the brain regions and channel connections that are important for emotion generation.
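To illustrate the style-based recalibration idea mentioned above, the following is a minimal NumPy sketch of such a module: it summarizes each feature channel by "style" statistics (mean and standard deviation), maps them through a per-channel weighting to a sigmoid gate, and rescales the channel accordingly. The function name and the plain channel-wise weighting are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def style_recalibration(x, w, b):
    """Sketch of style-based recalibration (hypothetical simplified form).

    x: feature maps of shape (N, C, H, W)
    w: per-channel weights of shape (C, 2) applied to [mean, std]
    b: per-channel bias of shape (C,)
    """
    mean = x.mean(axis=(2, 3))             # (N, C) channel-wise means
    std = x.std(axis=(2, 3))               # (N, C) channel-wise std-devs
    style = np.stack([mean, std], axis=2)  # (N, C, 2) style descriptors
    z = (style * w).sum(axis=2) + b        # channel-wise fully connected
    g = 1.0 / (1.0 + np.exp(-z))           # sigmoid gates in (0, 1)
    return x * g[:, :, None, None]         # recalibrate each channel
```

With zero weights and bias the gate is sigmoid(0) = 0.5 for every channel, so each feature map is simply halved; learned weights instead emphasize channels whose style statistics correlate with emotion.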