Ma Yahong, Huang Zhentao, Yang Yuyao, Zhang Shanwen, Dong Qi, Wang Rongrong, Hu Liangliang
Xi'an Key Laboratory of High Precision Industrial Intelligent Vision Measurement Technology, School of Electronic Information, Xijing University, Xi'an 710123, China.
College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China.
Brain Sci. 2024 Dec 21;14(12):1289. doi: 10.3390/brainsci14121289.
Emotions play a crucial role in people's lives, profoundly affecting their cognition, decision-making, and interpersonal communication. Emotion recognition based on brain signals has become a significant challenge in the fields of affective computing and human-computer interaction.
To address the inaccurate feature extraction and low accuracy of existing deep learning models in emotion recognition, this paper proposes DACB, a multi-channel automatic classification model for emotional EEG signals based on dual attention mechanisms, convolutional neural networks, and bidirectional long short-term memory (BiLSTM) networks. DACB extracts features in both the temporal and spatial dimensions, combining convolutional layers with Squeeze-and-Excitation (SE) attention modules that learn the importance of different channel features, thereby enhancing network performance. DACB also introduces a dot-product attention mechanism to learn the importance of spatial and temporal features, effectively improving the model's accuracy.
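The two attention components described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the weight shapes, the bottleneck reduction ratio, and the function names are illustrative assumptions, and only the general form of SE channel reweighting and scaled dot-product attention is shown.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def se_block(feats, w1, w2):
    """Squeeze-and-Excitation channel reweighting (illustrative).
    feats: (channels, time) feature map; w1, w2: bottleneck weights."""
    # Squeeze: global average pool over time -> one descriptor per channel
    z = feats.mean(axis=1)                                   # (C,)
    # Excitation: ReLU bottleneck followed by a sigmoid gate in (0, 1)
    gate = 1.0 / (1.0 + np.exp(-(np.maximum(z @ w1, 0) @ w2)))  # (C,)
    # Scale: reweight each channel by its learned importance
    return feats * gate[:, None]

def dot_product_attention(q, k, v):
    """Scaled dot-product attention over feature sequences."""
    scores = q @ k.T / np.sqrt(q.shape[-1])                  # (Tq, Tk)
    weights = softmax(scores, axis=-1)                       # rows sum to 1
    return weights @ v                                       # (Tq, d_v)

# Toy shapes: 8 EEG feature channels over 16 time steps
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 16))
w1 = rng.standard_normal((8, 2))   # bottleneck: 8 -> 2 (assumed ratio)
w2 = rng.standard_normal((2, 8))   # expand back: 2 -> 8
reweighted = se_block(feats, w1, w2)

q = rng.standard_normal((4, 8))
k = rng.standard_normal((6, 8))
v = rng.standard_normal((6, 8))
attended = dot_product_attention(q, k, v)
```

In a full DACB-like pipeline these blocks would sit between the convolutional feature extractor and the BiLSTM, with the SE gate weighting channels and the dot-product attention weighting temporal positions.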
In single-shot validation tests, the method achieves 99.96% accuracy on SEED-IV and 87.52%, 90.06%, and 89.05% on the three DREAMER classification tasks (Valence, Arousal, and Dominance), respectively. In 10-fold cross-validation, the corresponding accuracies are 99.73% and 84.26%, 85.40%, and 85.02%, outperforming other models.
These results demonstrate that the DACB model achieves high accuracy in emotion classification tasks, exhibiting strong performance and generalization ability and providing new directions for future research in EEG signal recognition.