Spatial-frequency-temporal convolutional recurrent network for olfactory-enhanced EEG emotion recognition.

Affiliations

Anhui Province Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University, Hefei 230601, China; Zhejiang Key Laboratory for Brain-Machine Collaborative Intelligence, Hangzhou Dianzi University, Hangzhou 310018, China.

Anhui Province Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University, Hefei 230601, China.

Publication Information

J Neurosci Methods. 2022 Jul 1;376:109624. doi: 10.1016/j.jneumeth.2022.109624. Epub 2022 May 16.

Abstract

BACKGROUND

Multimedia stimulation is an important means of inducing emotion-related brain activity. Building on this, emotion recognition from EEG signals has become a prominent topic in the field of affective computing.

NEW METHOD

In this paper, we develop a novel odor-video elicited physiological signal database (OVPD), in which we collect EEG signals from eight participants in positive, neutral and negative emotional states elicited by synchronizing traditional video content with odors. To make full use of EEG features from different domains, we design a 3DCNN-BiLSTM model combining a convolutional neural network (CNN) and bidirectional long short-term memory (BiLSTM) for EEG emotion recognition. First, we transform the EEG signals into 4D representations that retain spatial, frequency and temporal information. Then, the representations are fed into the 3DCNN-BiLSTM model to recognize emotions. The CNN learns spatial and frequency information from the 4D representations, while the BiLSTM extracts forward and backward temporal dependencies.
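The pipeline above can be sketched in PyTorch as follows. The 9×9 electrode grid, five frequency bands, six time segments, and all layer widths are illustrative assumptions for a minimal sketch, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    """Sketch: a 3D CNN extracts spatial-frequency features from each
    time segment of the 4D representation; a BiLSTM then models the
    forward and backward temporal dependencies across segments."""
    def __init__(self, feat=16, hidden=64, n_classes=3):
        super().__init__()
        self.feat = feat
        self.cnn = nn.Sequential(
            nn.Conv3d(1, feat, kernel_size=3, padding=1),  # over (bands, H, W)
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                       # -> (feat, 1, 1, 1)
        )
        self.lstm = nn.LSTM(feat, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)         # 3 emotion classes

    def forward(self, x):               # x: (batch, time, bands, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.reshape(b * t, 1, *x.shape[2:]))  # per segment
        out, _ = self.lstm(feats.reshape(b, t, self.feat))   # both directions
        return self.fc(out[:, -1])      # classify from the last time step

# Toy 4D EEG representation: batch 2, 6 segments, 5 bands, 9x9 grid
x = torch.randn(2, 6, 5, 9, 9)
logits = CNNBiLSTM()(x)                 # shape: (batch, 3)
```

Pooling each segment to a single feature vector keeps the sketch short; the paper's actual 3D CNN and feature dimensions may differ.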

RESULTS

We run 5-fold cross validation five times on the OVPD dataset to evaluate the model. In the three-class classification of positive, neutral and negative emotions, the model achieves an average accuracy of 98.29% (standard deviation 0.72%) under the olfactory-enhanced video stimuli and 98.03% (standard deviation 0.73%) under the traditional video stimuli. To verify the generalisability of the proposed model, we also evaluate the approach on the public SEED EEG emotion dataset.
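The evaluation protocol, five independent runs of 5-fold cross validation reported as mean ± standard deviation, can be sketched with standard-library Python. The dataset size and per-fold accuracies below are placeholders, not the paper's data:

```python
import random
import statistics

def repeated_kfold(n, k=5, repeats=5, seed=0):
    """Yield (train, test) index lists: `repeats` independent shuffles,
    each split into k disjoint test folds (5 runs of 5-fold CV)."""
    rng = random.Random(seed)
    for _ in range(repeats):
        idx = list(range(n))
        rng.shuffle(idx)
        folds = [idx[i::k] for i in range(k)]
        for i in range(k):
            train = [j for f in folds[:i] + folds[i + 1:] for j in f]
            yield train, folds[i]

# One accuracy per fold, then report mean and standard deviation,
# as in "98.29% with a standard deviation of 0.72%". The accuracies
# here are synthetic placeholders.
rng = random.Random(1)
accs = [0.97 + 0.02 * rng.random() for _ in repeated_kfold(n=120)]
print(f"{statistics.mean(accs):.4f} +/- {statistics.stdev(accs):.4f}")
```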

COMPARISON WITH EXISTING METHOD

Compared with other baseline methods, our model achieves better recognition performance on the OVPD dataset. For both the 3DCNN-BiLSTM model and the baselines, the average accuracy over positive, neutral and negative emotions is higher in response to the olfactory-enhanced videos than to the pure videos.

CONCLUSION

The proposed 3DCNN-BiLSTM model effectively fuses the spatial-frequency-temporal features of EEG signals for emotion recognition. The added olfactory stimuli can induce stronger emotions than traditional video stimuli alone and improve recognition accuracy to a certain extent. However, superimposing odors unrelated to the video scenes may distract participants' attention and thus reduce the final accuracy of EEG emotion recognition.
