School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo 454000, China.
Sensors (Basel). 2019 May 13;19(9):2212. doi: 10.3390/s19092212.
Emotion recognition based on multi-channel electroencephalogram (EEG) signals is becoming increasingly attractive. However, conventional methods ignore the spatial characteristics of EEG signals, which also contain salient information related to emotion states. In this paper, a deep learning framework based on a multiband feature matrix (MFM) and a capsule network (CapsNet) is proposed. In the framework, the frequency-domain, spatial, and frequency-band characteristics of the multi-channel EEG signals are combined to construct the MFM. Then, the CapsNet model is introduced to recognize emotion states from the input MFM. Experiments conducted on the dataset for emotion analysis using EEG, physiological, and video signals (DEAP) indicate that the proposed method outperforms most common models. The experimental results demonstrate that the three kinds of characteristics contained in the MFM are complementary and that the capsule network is better suited to mining and exploiting these correlated characteristics.
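The abstract describes combining frequency-domain (band power), spatial (electrode layout), and frequency-band information into a single multiband feature matrix. The following minimal sketch shows one plausible way to build such a matrix with NumPy/SciPy, assuming a 9x9 electrode grid, four standard bands, and a Welch band-power estimate; the electrode-to-grid mapping, grid size, band choice, and tiling layout are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from scipy.signal import welch

# Hypothetical 9x9 grid positions for a subset of EEG electrodes (assumption;
# the paper's actual electrode-to-grid mapping may differ).
CHANNEL_GRID = {
    "Fp1": (0, 3), "Fp2": (0, 5),
    "F7": (2, 0), "F3": (2, 2), "Fz": (2, 4), "F4": (2, 6), "F8": (2, 8),
    "T7": (4, 0), "C3": (4, 2), "Cz": (4, 4), "C4": (4, 6), "T8": (4, 8),
    "P7": (6, 0), "P3": (6, 2), "Pz": (6, 4), "P4": (6, 6), "P8": (6, 8),
    "O1": (8, 3), "O2": (8, 5),
}
# Standard EEG bands within DEAP's 4-45 Hz preprocessing range (assumption).
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def build_mfm(eeg, channel_names, fs=128):
    """Build a multiband feature matrix: one spatial grid per frequency band,
    each cell holding the band power of the electrode mapped to that position."""
    grids = np.zeros((len(BANDS), 9, 9))
    for ch_idx, name in enumerate(channel_names):
        if name not in CHANNEL_GRID:
            continue
        freqs, psd = welch(eeg[ch_idx], fs=fs, nperseg=fs * 2)
        row, col = CHANNEL_GRID[name]
        for b_idx, (lo, hi) in enumerate(BANDS.values()):
            mask = (freqs >= lo) & (freqs < hi)
            # Simple band-power estimate: sum of PSD bins within the band.
            grids[b_idx, row, col] = np.sum(psd[mask])
    # Tile the per-band grids into one image-like 2D matrix (2x2 blocks of 9x9),
    # which can then be fed to a capsule network as input.
    top = np.hstack([grids[0], grids[1]])
    bottom = np.hstack([grids[2], grids[3]])
    return np.vstack([top, bottom])

# Usage with synthetic data: one 60 s trial at 128 Hz (DEAP-like sampling rate).
names = list(CHANNEL_GRID)
trial = np.random.randn(len(names), 60 * 128)
mfm = build_mfm(trial, names)
print(mfm.shape)  # (18, 18)
```

This keeps the three characteristics explicit: band power supplies the frequency-domain information, the grid positions preserve the spatial arrangement of electrodes, and the per-band tiling retains the frequency-band structure in a single matrix.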