Department of Biomedical Engineering, Hefei University of Technology, Hefei 230009, China.
Comput Biol Med. 2020 Aug;123:103927. doi: 10.1016/j.compbiomed.2020.103927. Epub 2020 Jul 22.
In recent years, deep learning (DL) techniques, and in particular convolutional neural networks (CNNs), have shown great potential in electroencephalogram (EEG)-based emotion recognition. However, existing CNN-based EEG emotion recognition methods usually require a relatively complex feature pre-extraction stage. More importantly, CNNs cannot adequately characterize the intrinsic relationships among the different channels of EEG signals, which are a crucial cue for emotion recognition. In this paper, we propose an effective multi-level features guided capsule network (MLF-CapsNet) for multi-channel EEG-based emotion recognition to overcome these issues. MLF-CapsNet is an end-to-end framework that simultaneously extracts features from the raw EEG signals and determines the emotional state. Compared with the original CapsNet, it incorporates multi-level feature maps learned by different layers when forming the primary capsules, thereby enhancing its feature representation capability. In addition, it uses a bottleneck layer to reduce the number of parameters and accelerate computation. Our method achieves average accuracies of 97.97%, 98.31% and 98.32% on valence, arousal and dominance of the DEAP dataset, respectively, and 94.59%, 95.26% and 95.13% on valence, arousal and dominance of the DREAMER dataset, respectively. These results show that our method achieves higher accuracy than state-of-the-art methods.
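The following is a minimal PyTorch sketch of the ideas named in the abstract (feature maps from several convolutional stages concatenated to guide the primary capsules, a 1x1 bottleneck convolution to cut parameters, and dynamic routing to class capsules). It is not the authors' implementation: the layer widths, kernel sizes, routing iterations, and the assumed input layout of one 32-channel x 128-sample EEG segment per example are illustrative assumptions only.

import torch
import torch.nn as nn
import torch.nn.functional as F


def squash(s, dim=-1, eps=1e-8):
    """Squash capsule vectors so their length lies in [0, 1) while keeping direction."""
    norm_sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * s / torch.sqrt(norm_sq + eps)


class MLFCapsNetSketch(nn.Module):
    """Illustrative multi-level-feature capsule network for (channels x samples) EEG segments."""

    def __init__(self, input_hw=(32, 128), in_ch=1, num_classes=2,
                 caps_dim=8, out_caps_dim=16, routing_iters=3):
        super().__init__()
        self.caps_dim, self.num_classes = caps_dim, num_classes
        self.routing_iters = routing_iters
        # Three convolutional stages; their outputs are the "multi-level" feature maps.
        self.conv1 = nn.Conv2d(in_ch, 64, 3, padding=1)
        self.conv2 = nn.Conv2d(64, 128, 3, padding=1)
        self.conv3 = nn.Conv2d(128, 256, 3, padding=1)
        # 1x1 bottleneck shrinks the concatenated maps (64+128+256 -> 128 channels).
        self.bottleneck = nn.Conv2d(64 + 128 + 256, 128, 1)
        # Primary capsules: the channels of this conv are split into 8-D vectors.
        self.primary = nn.Conv2d(128, 32 * caps_dim, 9, stride=4)
        # Infer the number of primary capsules with a dummy pass, then build the
        # per-capsule transformation matrices used by dynamic routing.
        with torch.no_grad():
            n = self._primary_capsules(torch.zeros(1, in_ch, *input_hw)).size(1)
        self.W = nn.Parameter(0.01 * torch.randn(1, n, num_classes, out_caps_dim, caps_dim))

    def _primary_capsules(self, x):
        f1 = F.relu(self.conv1(x))
        f2 = F.relu(self.conv2(f1))
        f3 = F.relu(self.conv3(f2))
        mlf = torch.cat([f1, f2, f3], dim=1)            # multi-level feature guidance
        u = self.primary(F.relu(self.bottleneck(mlf)))
        return squash(u.view(x.size(0), -1, self.caps_dim))   # (B, N, 8)

    def forward(self, x):
        u = self._primary_capsules(x)
        # Prediction of each primary capsule for every class capsule: (B, N, C, 16).
        u_hat = torch.matmul(self.W, u[:, :, None, :, None]).squeeze(-1)
        # Dynamic routing-by-agreement between primary and class capsules.
        logits = torch.zeros(u.size(0), u.size(1), self.num_classes, device=x.device)
        for _ in range(self.routing_iters):
            c = F.softmax(logits, dim=2)                      # coupling coefficients
            v = squash((c.unsqueeze(-1) * u_hat).sum(dim=1))  # class capsules (B, C, 16)
            logits = logits + (u_hat * v.unsqueeze(1)).sum(dim=-1)
        return v.norm(dim=-1)                                 # capsule length ~ class score

# Example: a batch of four 32-channel, 128-sample EEG segments.
scores = MLFCapsNetSketch()(torch.randn(4, 1, 32, 128))
print(scores.shape)   # torch.Size([4, 2]), one score per emotion class (e.g. low/high valence)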