Gao Dongrui, Zheng Qingyuan, Li Pengrui, Wang Manqing
School of Computer Science, Chengdu University of Information Technology, Chengdu, China.
Comput Methods Biomech Biomed Engin. 2025 Apr 2:1-13. doi: 10.1080/10255842.2025.2484557.
Electroencephalogram (EEG)-based emotion recognition is a reliable and deployable method for identifying human emotional states. Graph convolutional networks (GCNs) have exhibited superior performance in extracting topological features from EEG, but capturing dynamic topological relationships among channels remains a challenge. In this paper, we propose an adaptive GCN with residual spatio-temporal attention (AGC-RSTA) to extract spatio-temporal discriminative features. First, we construct an adaptive adjacency matrix in the graph convolution to extract dynamic spatial topological features. We then apply a residual spatio-temporal attention module to capture deep spatio-temporal features. Ablation studies and comparative experiments on the SEED and SEED-IV datasets demonstrate that the proposed model outperforms state-of-the-art methods, achieving recognition accuracies of 94.91% and 91.17%, respectively.
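The two components named in the abstract can be illustrated with a minimal forward-pass sketch. This is not the authors' implementation; the shapes (62 channels as in SEED, 5 band-wise features), the softmax row-normalization of the learned adjacency, and the channel-attention form are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_channels = 62   # SEED records 62 EEG electrodes
n_features = 5    # e.g. differential entropy over 5 frequency bands (assumption)
n_hidden = 16

X = rng.standard_normal((n_channels, n_features))         # node (channel) features
W = rng.standard_normal((n_features, n_hidden))           # graph-conv weight matrix
A_logits = rng.standard_normal((n_channels, n_channels))  # learnable adjacency logits
                                                          # (trained jointly in the paper)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Adaptive adjacency: row-normalize the learned logits so each channel
# aggregates a convex combination of all channels -- a data-driven topology
# rather than a fixed electrode-distance graph.
A = softmax(A_logits, axis=1)

# One adaptive graph-convolution layer with ReLU.
H = np.maximum(A @ X @ W, 0.0)

# Residual attention (sketch): per-channel attention weights re-scale H,
# and a residual connection adds the result back to the input features.
scores = H.mean(axis=1)          # per-channel summary statistic
alpha = softmax(scores)          # attention over channels
H_att = H + alpha[:, None] * H   # residual attention output, same shape as H
```

The point of the softmax normalization is that every row of `A` sums to 1, so the aggregation is a weighted average and the layer's output scale stays stable as the adjacency adapts during training.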