Bao Guangcheng, Zhuang Ning, Tong Li, Yan Bin, Shu Jun, Wang Linyuan, Zeng Ying, Shen Zhichong
Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, China.
Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China.
Front Hum Neurosci. 2021 Jan 20;14:605246. doi: 10.3389/fnhum.2020.605246. eCollection 2020.
Emotion recognition plays an important part in human-computer interaction (HCI). Currently, the main challenge in electroencephalogram (EEG)-based emotion recognition is the non-stationarity of EEG signals, which causes the performance of a trained model to degrade over time. In this paper, we propose a two-level domain adaptation neural network (TDANN) to construct a transfer model for EEG-based emotion recognition. Specifically, deep features that preserve the topological information of the EEG signals are extracted from the topological graph using a deep neural network. These features are then passed through the TDANN for two-level domain confusion. The first level uses the maximum mean discrepancy (MMD) to reduce the distribution discrepancy of the deep features between the source and target domains, and the second uses a domain adversarial neural network (DANN) to force the deep features closer to their corresponding class centers. We evaluated the domain-transfer performance of the model on both our self-built data set and the public data set SEED. In the cross-day transfer experiment, the ability to accurately discriminate joy from other emotions was high on the self-built data set: sadness (84%), anger (87.04%), and fear (85.32%). The accuracy reached 74.93% on the SEED data set. In the cross-subject transfer experiment, the ability to accurately discriminate joy from other emotions was equally high on the self-built data set: sadness (83.79%), anger (84.13%), and fear (81.72%). The average accuracy reached 87.9% on the SEED data set, higher than that of WGAN-DA. The experimental results demonstrate that the proposed TDANN can effectively handle the domain-transfer problem in EEG-based emotion recognition.
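The first-level criterion, MMD, measures the distance between the source- and target-domain feature distributions in a reproducing-kernel Hilbert space. Below is a minimal sketch of a biased squared-MMD estimate with a Gaussian kernel; it is an illustration of the general statistic, not the paper's implementation, and the feature dimensions, sample sizes, and kernel bandwidth are arbitrary assumptions.

```python
import numpy as np

def gaussian_kernel(x, y, sigma):
    # Pairwise squared Euclidean distances between rows of x and y
    d2 = np.sum(x**2, axis=1)[:, None] + np.sum(y**2, axis=1)[None, :] - 2.0 * x @ y.T
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(source, target, sigma=4.0):
    """Biased estimate of squared MMD between two feature batches."""
    k_ss = gaussian_kernel(source, source, sigma).mean()
    k_tt = gaussian_kernel(target, target, sigma).mean()
    k_st = gaussian_kernel(source, target, sigma).mean()
    return k_ss + k_tt - 2.0 * k_st

rng = np.random.default_rng(0)
# Same distribution: MMD^2 stays near zero (up to estimator bias)
same = mmd2(rng.normal(0.0, 1.0, (200, 8)), rng.normal(0.0, 1.0, (200, 8)))
# Mean-shifted distribution: MMD^2 grows with the domain gap
shifted = mmd2(rng.normal(0.0, 1.0, (200, 8)), rng.normal(1.5, 1.0, (200, 8)))
print(same, shifted)
```

In a domain-adaptation network this quantity would be added to the classification loss so that minimizing it pulls the deep features of the two domains together; the kernel bandwidth `sigma` strongly affects sensitivity and is often chosen by a median-distance heuristic.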