A temporal-spectral graph convolutional neural network model for EEG emotion recognition within and across subjects.

Author information

Li Rui, Yang Xuanwen, Lou Jun, Zhang Junsong

Affiliations

Brain Cognition and Computing Lab, National Engineering Research Center for E-Learning, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan, Hubei, China.

Brain Cognition and Intelligent Computing Lab, Department of Artificial Intelligence, School of Informatics, Xiamen University, Xiamen, Fujian, China.

Publication information

Brain Inform. 2024 Dec 18;11(1):30. doi: 10.1186/s40708-024-00242-x.

Abstract

EEG-based emotion recognition uses high-level information from neural activities to predict emotional responses in subjects. However, this information is sparsely distributed across the frequency, time, and spatial domains and varies across subjects. To address these challenges in emotion recognition, we propose a novel neural network model named the Temporal-Spectral Graph Convolutional Network (TSGCN). To capture high-level information distributed across the time, spatial, and frequency domains, TSGCN considers both neural-oscillation changes in different time windows and the topological structure between brain regions. Specifically, a Minimum Category Confusion (MCC) loss is used in TSGCN to reduce inconsistencies between subjective ratings and predefined labels. In addition, to improve the generalization of TSGCN under cross-subject variation, we propose Deep and Shallow feature Dynamic Adversarial Learning (DSDAL), which measures the distance between the source and target domains. Extensive experiments on public datasets demonstrate that TSGCN outperforms state-of-the-art methods in EEG-based emotion recognition. Ablation studies show that the mixed neural networks and our proposed methods in TSGCN contribute significantly to its high performance and robustness. Detailed investigations further demonstrate the effectiveness of TSGCN in addressing these challenges in emotion recognition.
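The abstract names a Minimum Category Confusion (MCC) loss but does not give its form. As an illustrative sketch only, the NumPy code below implements a class-confusion penalty in the spirit of the widely used Minimum Class Confusion loss (entropy-weighted class-confusion matrix with its off-diagonal mass minimized); the paper's exact definition may differ, and all names here are placeholders.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled row-wise softmax."""
    z = logits / temperature
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def mcc_loss(logits, temperature=2.5):
    """Class-confusion loss sketch: penalize probability mass that
    two different classes share across a batch of predictions.

    logits: (batch, num_classes) raw classifier outputs.
    Returns a scalar >= 0; near 0 when predictions are confident
    and mutually consistent, larger when classes are confused.
    """
    probs = softmax(logits, temperature)                    # (B, C)
    # Entropy-based sample weights: more certain samples count more.
    ent = -(probs * np.log(probs + 1e-8)).sum(axis=1)       # (B,)
    w = 1.0 + np.exp(-ent)
    w = len(w) * w / w.sum()                                # mean weight = 1
    # Weighted class-confusion matrix, row-normalized.
    confusion = (w[:, None] * probs).T @ probs              # (C, C)
    confusion = confusion / confusion.sum(axis=1, keepdims=True)
    num_classes = probs.shape[1]
    # Off-diagonal mass = inter-class confusion to be minimized.
    return (confusion.sum() - np.trace(confusion)) / num_classes
```

On a batch of confident, consistent predictions the confusion matrix is nearly diagonal and the loss approaches zero; on maximally uncertain (uniform) predictions it approaches (C-1)/C.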

Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c44a/11655824/989ee5d48221/40708_2024_242_Fig1_HTML.jpg
