Institute of Information Science, Beijing Jiaotong University, Beijing 100044, China.
College of Electronic and Information Engineering, Hebei University, Baoding 071000, China.
Math Biosci Eng. 2023 Dec 5;20(12):21537-21562. doi: 10.3934/mbe.2023953.
In recent years, with the continuous development of artificial intelligence and brain-computer interfaces, emotion recognition based on electroencephalogram (EEG) signals has become a flourishing research direction. Motivated by saliency effects in brain cognition, we construct a new spatio-temporal convolutional attention network for emotion recognition, named BiTCAN. First, in the proposed method, the raw EEG signals are de-baselined, and a two-dimensional mapping matrix sequence is constructed from the signals according to the electrode positions. Second, on the basis of this two-dimensional mapping matrix sequence, salient features of brain cognition are extracted with a Bi-hemisphere discrepancy module, and the spatio-temporal features of the EEG signals are captured with a 3-D convolution module. Finally, the saliency and spatio-temporal features are fused in an attention module to further capture the internal spatial relationships between brain regions, and the fused features are fed into a classifier for emotion recognition. Extensive experiments on two public datasets, DEAP and SEED, show that the proposed algorithm achieves accuracies above 97% on both, outperforming most existing emotion recognition algorithms.
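The first preprocessing stage of the pipeline (baseline removal, then mapping channels onto a 2-D matrix sequence by electrode position) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 9x9 grid, the electrode coordinates, and the function names are all illustrative assumptions.

```python
import numpy as np

# Hypothetical electrode layout: channel name -> (row, col) on a 9x9 scalp
# grid. The paper maps channels by electrode position; these coordinates are
# illustrative, not the authors' actual layout.
ELECTRODE_POS = {
    "Fp1": (0, 3), "Fp2": (0, 5),
    "F3": (2, 2), "Fz": (2, 4), "F4": (2, 6),
    "C3": (4, 2), "Cz": (4, 4), "C4": (4, 6),
    "P3": (6, 2), "Pz": (6, 4), "P4": (6, 6),
    "O1": (8, 3), "O2": (8, 5),
}

def debaseline(trial, baseline):
    """Subtract the per-channel mean of the pre-stimulus baseline segment.

    trial:    (channels, samples) EEG recorded during the stimulus
    baseline: (channels, samples) EEG recorded before the stimulus
    """
    return trial - baseline.mean(axis=1, keepdims=True)

def to_2d_sequence(signal, channel_names, grid=(9, 9)):
    """Place each time sample's channel values onto a 2-D scalp matrix,
    yielding a (samples, rows, cols) frame sequence suitable as input to
    a 3-D convolution module. Unmapped grid cells remain zero."""
    n_channels, n_samples = signal.shape
    frames = np.zeros((n_samples, *grid))
    for ch, name in enumerate(channel_names):
        row, col = ELECTRODE_POS[name]
        frames[:, row, col] = signal[ch]
    return frames

# Toy usage with random data standing in for an EEG trial.
names = list(ELECTRODE_POS)
rng = np.random.default_rng(0)
trial = rng.standard_normal((len(names), 128))
baseline = rng.standard_normal((len(names), 32))
seq = to_2d_sequence(debaseline(trial, baseline), names)
print(seq.shape)  # (128, 9, 9)
```

Keeping the electrode topology in a 2-D matrix is what lets the later 3-D convolution treat time as one axis and scalp geometry as the other two.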