
A spatial and temporal transformer-based EEG emotion recognition in VR environment.

Author Information

Li Ming, Yu Peng, Shen Yang

Affiliations

State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China.

Collaborative Innovation Center of Assessment for Basic Education Quality, Beijing Normal University, Beijing, China.

Publication Information

Front Hum Neurosci. 2025 Feb 26;19:1517273. doi: 10.3389/fnhum.2025.1517273. eCollection 2025.

Abstract

With the rapid development of deep learning, electroencephalography (EEG) emotion recognition has played a significant role in affective brain-computer interfaces, and many advanced emotion recognition models have achieved excellent results. However, current research mostly induces emotion in laboratory settings, which lack sufficient ecological validity and differ significantly from real-world scenarios. Moreover, emotion recognition models are typically trained and tested on datasets collected in laboratory environments, with little validation of their effectiveness in real-world situations. Virtual reality (VR), which provides a highly immersive and realistic experience, is an ideal tool for emotion research. In this paper, we collect EEG data from participants while they watch VR videos. We propose a purely Transformer-based method, EmoSTT, which uses two separate Transformer modules to comprehensively model the temporal and spatial information of EEG signals. We validate the effectiveness of EmoSTT on a passive-paradigm emotion dataset collected in a laboratory environment and an active-paradigm emotion dataset collected in a VR environment. Compared with state-of-the-art methods, our method achieves robust emotion classification performance and transfers well between different emotion elicitation paradigms.
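The abstract does not detail the EmoSTT architecture, only the core idea of two separate Transformer modules over the temporal and spatial axes of the EEG signal. The following is a minimal NumPy sketch of that idea, not the authors' implementation: the function names, the identity Q/K/V projections, the mean pooling, and the untrained linear classifier head are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens):
    """Single-head scaled dot-product self-attention with identity
    Q/K/V projections, kept minimal for illustration.
    tokens: (n_tokens, d_model) -> (n_tokens, d_model)."""
    d = tokens.shape[-1]
    scores = tokens @ tokens.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ tokens

def emostt_sketch(eeg, n_classes=2, seed=0):
    """Hypothetical two-branch sketch. eeg: (n_channels, n_times) segment.
    Temporal branch: tokens are time points (each a vector over channels).
    Spatial branch: tokens are channels (each a vector over time)."""
    temporal = self_attention(eeg.T)   # (n_times, n_channels)
    spatial = self_attention(eeg)      # (n_channels, n_times)
    # Mean-pool each branch and concatenate into one feature vector.
    feat = np.concatenate([temporal.mean(axis=0), spatial.mean(axis=0)])
    # Random (untrained) linear classifier head, illustrative only.
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.01, size=(feat.size, n_classes))
    return softmax(feat @ W)           # class probabilities
```

Tokenizing the same segment along two different axes is what lets one module attend across time samples and the other across electrode channels; a real model would add learned projections, positional encodings, and a trained classification head.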


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2ce6/11897567/fd197176ecec/fnhum-19-1517273-g0001.jpg
