Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin 300350, People's Republic of China.
J Neural Eng. 2023 Feb 14;20(1). doi: 10.1088/1741-2552/acb79e.
Constructing an efficient human emotion recognition model based on electroencephalogram (EEG) signals is significant for realizing emotional brain-computer interaction and improving machine intelligence. In this paper, we present a spatial-temporal feature fused convolutional graph attention network (STFCGAT) model based on multi-channel EEG signals for human emotion recognition. First, we combined the single-channel differential entropy (DE) feature with the cross-channel functional connectivity (FC) feature to extract both the temporal variation and spatial topological information of EEG. Next, a novel convolutional graph attention network was used to fuse the DE and FC features and to extract higher-level graph structural information with sufficient expressive power for emotion recognition. Furthermore, we introduced a multi-head attention mechanism into the graph neural network to improve the generalization ability of the model. We evaluated the emotion recognition performance of the proposed model on the public SEED and DEAP datasets. It achieved classification accuracies of 99.11% ± 0.83% and 94.83% ± 3.41% in the subject-dependent and subject-independent experiments on the SEED dataset, and accuracies of 91.19% ± 1.24% and 92.03% ± 4.57% for discrimination of arousal and valence in subject-independent experiments on the DEAP dataset. Notably, our model achieved state-of-the-art performance on the cross-subject emotion recognition task for both datasets. In addition, we gained insight into the proposed framework through both ablation experiments and analysis of the spatial patterns of the FC and DE features. All these results demonstrate the effectiveness of the STFCGAT architecture for emotion recognition and indicate that the spatial-temporal characteristics of the brain differ significantly across emotional states.
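The multi-head graph attention mechanism mentioned above can be illustrated with a minimal sketch. This is not the authors' STFCGAT implementation; it is a generic GAT-style layer in NumPy, where the adjacency matrix `A` stands in for a thresholded functional-connectivity graph over EEG channels and `X` stands in for per-channel features such as DE. All function and variable names here are hypothetical.

```python
import numpy as np

def multi_head_graph_attention(X, A, W_list, a_list):
    """One multi-head graph attention layer (GAT-style), averaging heads.

    X: (N, F) node features, e.g. per-channel DE features.
    A: (N, N) adjacency mask, e.g. thresholded FC between channels.
    W_list: per-head (F, F') projection matrices.
    a_list: per-head (2*F',) attention vectors.
    """
    N = X.shape[0]
    outputs = []
    for W, a in zip(W_list, a_list):
        H = X @ W                                    # (N, F') projected features
        # attention logits e_ij = LeakyReLU(a^T [h_i || h_j])
        e = np.zeros((N, N))
        for i in range(N):
            for j in range(N):
                e[i, j] = np.concatenate([H[i], H[j]]) @ a
        e = np.where(e > 0, e, 0.2 * e)              # LeakyReLU, slope 0.2
        e = np.where(A > 0, e, -1e9)                 # mask non-neighbouring channels
        alpha = np.exp(e - e.max(axis=1, keepdims=True))
        alpha /= alpha.sum(axis=1, keepdims=True)    # softmax over each node's neighbours
        outputs.append(alpha @ H)                    # attention-weighted aggregation
    return np.mean(outputs, axis=0)                  # combine heads by averaging

# toy example: 4 "channels", 3 input features per channel, 2 heads
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
Ws = [rng.normal(size=(3, 2)) for _ in range(2)]
As = [rng.normal(size=(4,)) for _ in range(2)]
out = multi_head_graph_attention(X, A, Ws, As)
print(out.shape)  # (4, 2): one fused feature vector per channel
```

In practice such a layer would be stacked with convolutional feature extractors and trained end-to-end; averaging (or concatenating) several attention heads is what gives the multi-head variant its regularizing effect on generalization.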