Shen Fangyao, Dai Guojun, Lin Guang, Zhang Jianhai, Kong Wanzeng, Zeng Hong
School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, China.
Key Laboratory of Brain Machine Collaborative Intelligence of Zhejiang Province, Hangzhou, China.
Cogn Neurodyn. 2020 Dec;14(6):815-828. doi: 10.1007/s11571-020-09634-1. Epub 2020 Sep 14.
In this paper, we present a novel method, the four-dimensional convolutional recurrent neural network, which explicitly integrates the frequency, spatial, and temporal information of multichannel EEG signals to improve EEG-based emotion recognition accuracy. First, to preserve these three kinds of information, we transform the differential entropy features from different channels into 4D structures to train the deep model. We then introduce the CRNN model, which combines a convolutional neural network (CNN) with a recurrent neural network built on long short-term memory (LSTM) cells. The CNN learns frequency and spatial information from each temporal slice of the 4D input, and the LSTM extracts temporal dependencies from the CNN outputs. The output of the last LSTM node is used for classification. Our model achieves state-of-the-art performance on both the SEED and DEAP datasets under intra-subject splitting. The experimental results demonstrate the effectiveness of integrating the frequency, spatial, and temporal information of EEG for emotion recognition.
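The 4D input construction described above can be sketched as follows. This is a minimal illustration, not the authors' code: the grid size (8×9), the channel-to-grid coordinates, and the function name `to_4d` are all hypothetical placeholders; the actual mapping of electrodes onto the 2D plane follows the electrode layout used in the paper.

```python
import numpy as np

def to_4d(de_features, channel_pos, grid_hw=(8, 9)):
    """Arrange per-channel differential entropy (DE) features into a 4D tensor.

    de_features : array of shape (T, C, B) -- T temporal slices,
                  C EEG channels, B frequency bands.
    channel_pos : list of (row, col) grid coordinates, one per channel
                  (hypothetical mapping for illustration).
    Returns an array of shape (T, B, H, W): each temporal slice is a
    stack of B spatial maps, preserving frequency, spatial, and
    temporal information. Grid cells with no electrode stay zero.
    """
    T, C, B = de_features.shape
    H, W = grid_hw
    out = np.zeros((T, B, H, W), dtype=de_features.dtype)
    for c, (row, col) in enumerate(channel_pos):
        out[:, :, row, col] = de_features[:, c, :]
    return out

# Toy example: 3 channels, 2 temporal slices, 5 frequency bands.
pos = [(0, 0), (0, 1), (1, 1)]          # hypothetical grid coordinates
de = np.random.rand(2, 3, 5)
x4d = to_4d(de, pos)
print(x4d.shape)  # (2, 5, 8, 9)
```

Each temporal slice `x4d[t]` (shape B×H×W) is then a natural input for the CNN, and the sequence of CNN outputs over `t` feeds the LSTM, whose last output is classified.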