Chen Guijun, Liu Yue, Zhang Xueying
College of Electronic Information and Optical Engineering, Taiyuan University of Technology, Taiyuan 030024, China.
Brain Sci. 2024 Aug 16;14(8):820. doi: 10.3390/brainsci14080820.
Electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) can objectively reflect a person's emotional state and have been widely studied for emotion recognition. However, effective feature fusion and discriminative feature learning from EEG-fNIRS data remain challenging. To improve emotion recognition accuracy, a graph convolution and capsule attention network model (GCN-CA-CapsNet) is proposed. First, EEG-fNIRS signals are collected from 50 subjects whose emotions are induced by video clips. Then, EEG and fNIRS features are extracted and fused by graph convolution with a Pearson-correlation adjacency matrix to generate higher-quality primary capsules. Finally, a capsule attention module is introduced to assign different weights to the primary capsules, so that higher-quality primary capsules are selected to generate better classification capsules in the dynamic routing mechanism. We validate the efficacy of the proposed method on our emotional EEG-fNIRS dataset with an ablation study. Extensive experiments demonstrate that the proposed GCN-CA-CapsNet achieves more satisfactory performance than state-of-the-art methods, raising average accuracy by 3-11%.
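The fusion step above hinges on a Pearson-correlation adjacency matrix over the combined EEG-fNIRS channels. A minimal sketch of how such a matrix could be constructed is shown below; the function name, the thresholding of weak correlations, and the (channels × features) layout are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def pearson_adjacency(features, threshold=0.1):
    """Build a Pearson-correlation adjacency matrix over channel features.

    features : (n_channels, n_features) array, each row one EEG/fNIRS
    channel's feature vector. Returns an (n_channels, n_channels) matrix
    of absolute Pearson correlations, with entries below `threshold`
    and the diagonal (self-loops) zeroed.
    """
    corr = np.corrcoef(features)      # pairwise Pearson correlations
    adj = np.abs(corr)                # edge weight = |r|
    adj[adj < threshold] = 0.0        # drop weak connections (assumed step)
    np.fill_diagonal(adj, 0.0)        # remove self-loops
    return adj

# Toy example: 6 channels, 32 features per channel
rng = np.random.default_rng(0)
x = rng.standard_normal((6, 32))
A = pearson_adjacency(x)
print(A.shape)  # (6, 6)
```

In a GCN, this matrix (typically after adding self-loops back and symmetric normalization) weights how strongly each channel's features propagate to its neighbors during fusion.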