Cao Lei, Yu Binlong, Dong Yilin, Liu Tianyu, Li Jie
School of Information Engineering, Shanghai Maritime University, Shanghai 201306, People's Republic of China.
School of Electronic and Information Engineering, Tongji University, Shanghai 200092, People's Republic of China.
Physiol Meas. 2024 Dec 5;45(12). doi: 10.1088/1361-6579/ad9661.
In recent years, emotion recognition using electroencephalogram (EEG) signals has garnered significant interest due to its non-invasive nature and high temporal resolution. We introduced a method that bypasses traditional manual feature engineering: emphasizing data preprocessing, it leverages the topological relationships between channels to transform EEG signals from two-dimensional time series into three-dimensional spatio-temporal representations. By maximizing the potential of deep learning, our approach provides a data-driven and robust method for identifying emotional states. The synergy between a convolutional neural network and attention mechanisms enabled automatic feature extraction and dynamic learning of inter-channel dependencies. Our method showed remarkable performance on emotion recognition tasks, achieving an average accuracy of 98.62% for arousal and 98.47% for valence, surpassing previous state-of-the-art results of 95.76% and 95.15%, respectively. Furthermore, we conducted a series of pivotal experiments that broaden the scope of emotion recognition research, exploring further possibilities in the field.
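The core transformation described in the abstract — scattering per-channel EEG time series onto a 2-D grid that reflects electrode topology, then stacking samples over time into a 3-D tensor — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the 9×9 grid size, and the example electrode coordinates are all assumptions chosen for the sketch.

```python
import numpy as np

def eeg_to_spatiotemporal(signals, positions, grid=(9, 9)):
    """Scatter per-channel EEG time series onto a 2-D electrode grid,
    yielding a (time, height, width) tensor suitable for a CNN.

    signals   : dict mapping channel name -> 1-D array of samples (equal length)
    positions : dict mapping channel name -> (row, col) grid coordinate
                (an assumed topological layout, not the paper's exact mapping)
    """
    n_samples = len(next(iter(signals.values())))
    # One zero-filled spatial frame per time step; unused grid cells stay zero.
    frames = np.zeros((n_samples, *grid), dtype=np.float32)
    for channel, series in signals.items():
        row, col = positions[channel]
        frames[:, row, col] = series
    return frames

# Toy example: 3 channels, 4 samples each (coordinates are illustrative only).
pos = {"Fp1": (0, 3), "Fp2": (0, 5), "Cz": (4, 4)}
sig = {ch: np.arange(4, dtype=np.float32) + i for i, ch in enumerate(pos)}
x = eeg_to_spatiotemporal(sig, pos)
print(x.shape)  # (4, 9, 9)
```

Each time step thus becomes a sparse 2-D "image" whose non-zero pixels sit at neighboring grid positions for physically neighboring electrodes, which is what lets a convolutional network exploit inter-channel spatial structure without hand-crafted features.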