Bao Guangcheng, Yan Bin, Tong Li, Shu Jun, Wang Linyuan, Yang Kai, Zeng Ying
Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, China.
Key Laboratory for NeuroInformation of Ministry of Education, School of Life Sciences and Technology, University of Electronic Science and Technology of China, Chengdu, China.
Front Comput Neurosci. 2021 Dec 9;15:723843. doi: 10.3389/fncom.2021.723843. eCollection 2021.
One of the greatest limitations in the field of EEG-based emotion recognition is the lack of training samples, which makes it difficult to establish effective models for emotion recognition. Inspired by the excellent achievements of generative models in image processing, we propose a data augmentation model for EEG-based emotion recognition, named VAE-D2GAN, built on a generative adversarial network. EEG features representing different emotions are extracted as topological maps of differential entropy (DE) under five classical frequency bands. The proposed model is designed to learn the distributions of these features from real EEG signals and to generate artificial samples for training. The variational auto-encoder (VAE) architecture can learn the spatial distribution of the actual data through a latent vector, and is introduced into the dual-discriminator GAN to improve the diversity of the generated artificial samples. To evaluate the performance of this model, we conduct a systematic test on two public emotion EEG datasets, SEED and SEED-IV. The recognition accuracy of the method with data augmentation is 92.5% and 82.3% on the SEED and SEED-IV datasets, respectively, which is 1.5% and 3.5% higher than that of the same method without data augmentation. The experimental results show that the artificial samples generated by our model can effectively enhance the performance of EEG-based emotion recognition.
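As context for the feature-extraction step described above, the snippet below is a minimal sketch of computing per-band differential-entropy (DE) features from a single-channel EEG signal, using the standard closed form DE = ½·ln(2πeσ²) for a Gaussian-distributed signal. The band cutoffs, filter order, and function names are illustrative assumptions; the paper's exact pipeline (including the construction of topological maps across channels) is not reproduced here.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def differential_entropy(x):
    # For an approximately Gaussian signal, DE = 0.5 * ln(2*pi*e*sigma^2)
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

def band_de_features(eeg, fs, bands=None):
    """Compute one DE value per frequency band for a 1-D EEG signal.

    eeg : 1-D array of samples; fs : sampling rate in Hz.
    Band edges below are common conventions, not taken from the paper.
    """
    if bands is None:
        bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
                 "beta": (14, 31), "gamma": (31, 50)}
    nyq = fs / 2.0
    feats = {}
    for name, (lo, hi) in bands.items():
        # 4th-order Butterworth band-pass, zero-phase filtering
        b, a = butter(4, [lo / nyq, hi / nyq], btype="band")
        feats[name] = differential_entropy(filtfilt(b, a, eeg))
    return feats
```

In practice, DE values like these would be computed per channel and per band, then arranged into 2-D topological maps according to electrode positions before being fed to the generative model.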