
EEG-based Emotion Recognition Using Spatial-Temporal Representation via Bi-GRU.

Author Information

Lew Wai-Cheong Lincoln, Wang Di, Shylouskaya Katsiaryna, Zhang Zhuo, Lim Joo-Hwee, Ang Kai Keng, Tan Ah-Hwee

Publication Information

Annu Int Conf IEEE Eng Med Biol Soc. 2020 Jul;2020:116-119. doi: 10.1109/EMBC44109.2020.9176682.

Abstract

Many prior studies on EEG-based emotion recognition did not consider the spatial-temporal relationships among brain regions and across time. In this paper, we propose a Regionally-Operated Domain Adversarial Network (RODAN) to learn the spatial-temporal relationships that correlate brain regions over time. Moreover, we incorporate an attention mechanism to enable cross-domain learning that captures the spatial-temporal relationships among the EEG electrodes, and an adversarial mechanism to reduce the domain shift in EEG signals. To evaluate the performance of RODAN, we conduct subject-dependent, subject-independent, and subject-biased experiments on both the DEAP and SEED-IV data sets, which yield encouraging results. In addition, we discuss the biased sampling issue often observed in EEG-based emotion recognition and present an unbiased benchmark for both DEAP and SEED-IV.
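The adversarial mechanism mentioned in the abstract is commonly built on a gradient reversal layer: the layer passes features through unchanged on the forward pass but negates (and scales) gradients on the backward pass, so the feature extractor is trained to confuse a domain classifier. A minimal sketch of that idea, with no automatic-differentiation framework; the class and parameter names are illustrative and not taken from the RODAN paper:

```python
class GradientReversal:
    """Gradient reversal layer sketch: identity on the forward pass,
    negated and scaled gradient on the backward pass."""

    def __init__(self, lam=1.0):
        # lam controls the strength of the adversarial signal
        self.lam = lam

    def forward(self, x):
        # Features flow to the domain classifier unchanged
        return x

    def backward(self, grad):
        # Gradients flowing back to the feature extractor are reversed,
        # pushing it toward domain-invariant representations
        return [-self.lam * g for g in grad]


grl = GradientReversal(lam=0.5)
features = [0.2, -1.3, 0.7]
assert grl.forward(features) == features                     # forward: unchanged
assert grl.backward([1.0, 1.0, 1.0]) == [-0.5, -0.5, -0.5]   # backward: reversed
```

In a full implementation the reversal would sit between the shared feature extractor and the domain classifier, while the emotion classifier receives unreversed gradients.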

