College of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China.
Key Laboratory of Brain Machine Collaborative Intelligence of Zhejiang Province, Hangzhou 310018, China.
Sensors (Basel). 2023 Jan 26;23(3):1404. doi: 10.3390/s23031404.
Various relations existing in Electroencephalogram (EEG) data are significant for EEG feature representation. Graph-based studies therefore focus on extracting the relevance between EEG channels. A shortcoming of existing graph studies is that they consider only a single relationship among EEG electrodes, which results in an incomplete representation of EEG data and relatively low emotion recognition accuracy. In this paper, we propose a fusion graph convolutional network (FGCN) that extracts the various relations present in EEG data and fuses them to represent EEG data more comprehensively for emotion recognition. First, the FGCN mines brain connectivity features in terms of topology, causality, and function. Then, we propose a local fusion strategy that fuses these three graphs to fully exploit the valuable channels with strong topological, causal, and functional relations. Finally, a graph convolutional neural network is adopted to better represent EEG data for emotion recognition. Experiments on the SEED and SEED-IV datasets demonstrate that fusing different relation graphs is effective in improving emotion recognition. Furthermore, the 3-class and 4-class recognition accuracies are higher than those of other state-of-the-art methods.
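The pipeline described above (mine several channel-relation graphs, fuse them locally, then apply graph convolution) can be sketched in miniature. This is an illustrative assumption, not the paper's implementation: the element-wise-max fusion rule, the toy 3-channel graphs, and the simple degree-normalized propagation step are all stand-ins for the actual local fusion strategy and GCN layers.

```python
# Hypothetical sketch: fuse three channel-relation graphs (topology,
# causality, function) and run one graph-convolution-style step.
# The max-based fusion and toy weights are illustrative assumptions,
# not the FGCN's actual local fusion strategy.

def fuse_graphs(graphs):
    """Element-wise max over adjacency matrices: for each channel pair,
    keep the strongest relation found in any of the graphs."""
    n = len(graphs[0])
    return [[max(g[i][j] for g in graphs) for j in range(n)]
            for i in range(n)]

def propagate(adj, feats):
    """One simple propagation step: each channel's new feature is the
    degree-normalized weighted sum of its neighbors' (and, via a
    self-loop, its own) features."""
    n = len(adj)
    out = []
    for i in range(n):
        weights = [adj[i][j] + (1.0 if i == j else 0.0)  # add self-loop
                   for j in range(n)]
        total = sum(weights)
        out.append(sum(w * f for w, f in zip(weights, feats)) / total)
    return out

# Three toy 3-channel relation graphs (symmetric weights in [0, 1]).
topology  = [[0, .8, .1], [.8, 0, .2], [.1, .2, 0]]
causality = [[0, .3, .6], [.3, 0, .1], [.6, .1, 0]]
function  = [[0, .5, .2], [.5, 0, .9], [.2, .9, 0]]

fused = fuse_graphs([topology, causality, function])
features = [1.0, 2.0, 3.0]
print(fused[0])                       # [0, 0.8, 0.6]
print(propagate(fused, features))
```

Under this sketch, a channel pair strongly related in any one view (topological, causal, or functional) keeps a strong edge in the fused graph, which is the intuition the abstract attributes to the local fusion strategy.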