
GC-STCL: A Granger Causality-Based Spatial-Temporal Contrastive Learning Framework for EEG Emotion Recognition.

Authors

Wang Lei, Wang Siming, Jin Bo, Wei Xiaopeng

Affiliations

School of Software Technology, Dalian University of Technology, Dalian 116024, China.

School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China.

Publication

Entropy (Basel). 2024 Jun 24;26(7):540. doi: 10.3390/e26070540.

Abstract

EEG signals capture information through multi-channel electrodes and hold promising prospects for human emotion recognition. However, the high noise levels and diverse nature of EEG signals pose significant challenges, leading to potential overfitting that further complicates the extraction of meaningful information. To address this issue, we propose a Granger causality-based spatial-temporal contrastive learning framework, which significantly enhances the ability to capture EEG signal information by modeling rich spatial-temporal relationships. Specifically, in the spatial dimension, we employ a sampling strategy to select positive sample pairs from individuals watching the same video. Subsequently, a Granger causality test is used to augment the graph data and construct potential causal links between channels. Finally, a residual graph convolutional neural network extracts features from the EEG signals and computes a spatial contrastive loss. In the temporal dimension, we first apply a frequency-domain noise reduction module for data augmentation on each time series. Then, we introduce the Granger-Former model to capture time-domain representations and calculate a temporal contrastive loss. We conduct extensive experiments on two publicly available emotion recognition datasets (DEAP and SEED), achieving a 1.65% accuracy improvement on the DEAP dataset and a 1.55% improvement on the SEED dataset over state-of-the-art unsupervised models. Our method outperforms benchmark methods in both prediction accuracy and interpretability.
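The Granger causality test used above to construct causal links between channels can be illustrated with a minimal numpy sketch. This is a pairwise lag-2 F-test between two channel time series; the paper's actual lag order, significance threshold, and multi-channel procedure are not given in the abstract, so those choices here are assumptions.

```python
import numpy as np

def granger_f_stat(x, y, lag=2):
    """F-statistic for the null 'y does NOT Granger-cause x'.

    Compares a restricted autoregression of x on its own past against an
    unrestricted model that also includes the past of y; a large F means
    y's history helps predict x beyond x's own history.
    """
    T = len(x)
    n = T - lag
    target = x[lag:]

    def lagged(v):
        # columns are v shifted back by 1 .. lag steps
        return np.column_stack([v[lag - j:T - j] for j in range(1, lag + 1)])

    ones = np.ones((n, 1))
    X_r = np.hstack([ones, lagged(x)])             # restricted design
    X_u = np.hstack([ones, lagged(x), lagged(y)])  # unrestricted design

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        resid = target - X @ beta
        return resid @ resid

    rss_r, rss_u = rss(X_r), rss(X_u)
    return ((rss_r - rss_u) / lag) / (rss_u / (n - 2 * lag - 1))

# Demo: x is driven by the previous value of y, while z is independent noise.
rng = np.random.default_rng(0)
y = rng.standard_normal(500)
x = np.zeros(500)
x[1:] = 0.8 * y[:-1] + 0.1 * rng.standard_normal(499)
z = rng.standard_normal(500)
f_causal = granger_f_stat(x, y)   # should be large
f_null = granger_f_stat(x, z)     # should be near 1
```

Thresholding such F-statistics (e.g. against an F-distribution critical value) yields a directed adjacency matrix over the EEG channels, which is the kind of graph a graph convolutional network can then operate on.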

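The spatial and temporal contrastive losses computed over positive sample pairs are, in the common formulation, variants of the InfoNCE objective. Below is a minimal numpy sketch of such a loss over paired embeddings; the function name, temperature value, and dimensions are illustrative assumptions, not details from the paper.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE contrastive loss: row i of z1 and row i of z2 form a
    positive pair; all other rows of z2 act as negatives for row i."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature                 # (N, N) scaled cosine sims
    sim -= sim.max(axis=1, keepdims=True)         # stabilise the softmax
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # positives on the diagonal

# Demo: near-identical views score a much lower loss than unrelated ones.
rng = np.random.default_rng(1)
z = rng.standard_normal((32, 16))
loss_aligned = info_nce(z, z + 0.01 * rng.standard_normal((32, 16)))
loss_random = info_nce(z, rng.standard_normal((32, 16)))
```

Minimising this loss pulls embeddings of the two augmented views of the same sample together while pushing apart embeddings of different samples, which matches the role the spatial and temporal contrastive losses play in the framework.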

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0007/11275820/db83e63be5eb/entropy-26-00540-g001.jpg
