Qian Ren, Xiong Xin, Zhou Jianhua, Yu Hongde, Sha Kaiwen
Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China.
Brain Sci. 2024 Aug 15;14(8):817. doi: 10.3390/brainsci14080817.
In recent years, EEG-based emotion recognition has made progress, but existing models still suffer from low efficiency and loss of emotional information, and recognition accuracy leaves room for improvement. To exploit the emotional information in EEG more fully and to improve recognition accuracy while reducing computational cost, this paper proposes a convolutional-recurrent hybrid network with a dual-stream adaptive approach and attention mechanisms (CSA-SA-CRTNN). First, the model uses a CSAM module to assign corresponding weights to the EEG channels. Then, an adaptive dual-stream convolutional-recurrent network (SA-CRNN and MHSA-CRNN) extracts local spatiotemporal features. The extracted local features are concatenated and fed into a temporal convolutional network with multi-head self-attention (MHSA-TCN) to capture global information. Finally, the resulting EEG features are used for emotion classification. In binary and ternary classification experiments on the DEAP dataset, the model achieves 99.26% and 99.15% accuracy for arousal and valence in binary classification and 97.69% and 98.05% in ternary classification; on the SEED dataset it achieves 98.63% accuracy, surpassing related methods. In addition, the model is markedly more efficient than comparable models, reaching higher accuracy at lower resource consumption.
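To make the described pipeline concrete, the following is a minimal PyTorch sketch of the architecture as stated in the abstract: channel reweighting, two parallel convolutional-recurrent streams (one with multi-head self-attention), a dilated temporal convolution with self-attention over the concatenated features, and a classification head. All module names, layer sizes, and hyperparameters below are illustrative assumptions; the abstract does not specify them, and this is not the authors' implementation.

```python
# Illustrative sketch only -- layer sizes and choices are assumptions,
# not the published CSA-SA-CRTNN configuration.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Stand-in for the CSAM block: learns a weight per EEG channel."""
    def __init__(self, n_channels: int):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_channels, n_channels // 2),
            nn.ReLU(),
            nn.Linear(n_channels // 2, n_channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                  # x: (batch, channels, time)
        w = self.fc(x.mean(dim=-1))        # squeeze over the time axis
        return x * w.unsqueeze(-1)         # reweight each channel


class ConvRecurrentStream(nn.Module):
    """One conv-recurrent stream; optional self-attention on the sequence."""
    def __init__(self, n_channels: int, hidden: int, use_mhsa: bool):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, hidden, kernel_size=7, padding=3),
            nn.BatchNorm1d(hidden),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.mhsa = (nn.MultiheadAttention(hidden, num_heads=4,
                                           batch_first=True)
                     if use_mhsa else None)

    def forward(self, x):                  # x: (batch, channels, time)
        h = self.conv(x).transpose(1, 2)   # -> (batch, time', hidden)
        h, _ = self.rnn(h)
        if self.mhsa is not None:
            h, _ = self.mhsa(h, h, h)
        return h


class EmotionNet(nn.Module):
    """Channel attention -> dual streams -> attentive TCN -> classifier."""
    def __init__(self, n_channels=32, hidden=64, n_classes=2):
        super().__init__()
        self.csam = ChannelAttention(n_channels)
        self.stream_a = ConvRecurrentStream(n_channels, hidden, use_mhsa=False)
        self.stream_b = ConvRecurrentStream(n_channels, hidden, use_mhsa=True)
        # A single dilated convolution stands in for the TCN trunk.
        self.tcn = nn.Conv1d(2 * hidden, hidden, kernel_size=3,
                             padding=2, dilation=2)
        self.attn = nn.MultiheadAttention(hidden, num_heads=4,
                                          batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, channels, time)
        x = self.csam(x)
        h = torch.cat([self.stream_a(x), self.stream_b(x)], dim=-1)
        h = self.tcn(h.transpose(1, 2)).transpose(1, 2)
        h, _ = self.attn(h, h, h)          # global temporal context
        return self.head(h.mean(dim=1))    # pool over time, classify


# Example: a batch of 8 one-second DEAP-style windows (32 channels, 128 Hz).
model = EmotionNet(n_channels=32, n_classes=2)
logits = model(torch.randn(8, 32, 128))
print(logits.shape)                        # torch.Size([8, 2])
```

The dual-stream design lets one branch emphasize channel-level weighting while the other applies self-attention over time, so the concatenated features carry complementary local views before the attentive temporal convolution aggregates global context.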