Chen Wei, Liao Yuan, Dai Rui, Dong Yuanlin, Huang Liya
College of Electronic and Optical Engineering & College of Flexible Electronics (Future Technology), Nanjing University of Posts and Telecommunications, Nanjing, China.
Front Comput Neurosci. 2024 Jul 19;18:1416494. doi: 10.3389/fncom.2024.1416494. eCollection 2024.
EEG-based emotion recognition is becoming crucial in brain-computer interfaces (BCI). Most current research focuses on improving accuracy while neglecting the interpretability of models; we instead aim to analyze, on the basis of graph structure, the influence of different brain regions and signal frequency bands on emotion generation. To this end, this paper proposes a method named Dual Attention Mechanism Graph Convolutional Neural Network (DAMGCN). Specifically, we use a graph convolutional neural network to model the brain network as a graph and extract representative spatial features. Furthermore, we employ the self-attention mechanism of the Transformer model, which allocates larger electrode-channel and frequency-band weights to important brain regions and frequency bands. Visualization of the attention mechanism clearly shows the weight allocation learned by DAMGCN. Evaluated on the DEAP, SEED, and SEED-IV datasets, the model achieved its best results on SEED, with accuracies of 99.42% in subject-dependent experiments and 73.21% in subject-independent experiments. These results surpass the accuracies of most existing models for EEG-based emotion recognition.
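The abstract describes two ingredients: attention-derived weights over electrode channels and frequency bands, followed by graph convolution over the channel graph. The sketch below is not the authors' DAMGCN implementation; it is a minimal NumPy illustration under assumed shapes (62 channels as in the SEED montage, 5 frequency bands, a small feature dimension), with random data standing in for EEG features and a random adjacency standing in for the learned brain graph.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
C, B, d = 62, 5, 8                     # channels, bands, feature dim (assumed)
X = rng.standard_normal((C, B, d))     # stand-in for per-channel, per-band features

# 1) Attention-style weighting (illustrative, not the paper's exact layer):
# score channels and bands against a query vector q, then softmax-normalize,
# so "important" channels/bands receive larger weights.
q = rng.standard_normal(d)
chan_w = softmax(X.mean(axis=1) @ q)   # (C,) one weight per electrode channel
band_w = softmax(X.mean(axis=0) @ q)   # (B,) one weight per frequency band
Xw = X * chan_w[:, None, None] * band_w[None, :, None]

# 2) One spectral graph-convolution step over the channel graph:
# standard GCN propagation H = D^{-1/2} (A + I) D^{-1/2} X W, with a random
# symmetric adjacency A in place of a learned brain connectivity graph.
A = (rng.random((C, C)) < 0.1).astype(float)
A = np.maximum(A, A.T)                              # symmetrize
A_hat = A + np.eye(C)                               # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
W = rng.standard_normal((B * d, 16))                # projection to 16 features
H = np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ Xw.reshape(C, B * d) @ W)
print(H.shape)                                      # (62, 16) node embeddings
```

In a trained model, `q`, `W`, and the adjacency would be learned, and inspecting `chan_w` and `band_w` gives exactly the kind of interpretability the abstract emphasizes: which brain regions and frequency bands the model weights most heavily.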