
Suppr 超能文献




A temporal-spatial feature fusion network for emotion recognition with individual differences reduction.

Author Information

Liu Benke, Wang Yongxiong, Wang Zhe, Wan Xin, Li Chenguang

Affiliations

University of Shanghai for Science and Technology, Shanghai 200093, China.

Publication Information

Neuroscience. 2025 Mar 17;569:195-209. doi: 10.1016/j.neuroscience.2025.01.049. Epub 2025 Jan 30.

DOI: 10.1016/j.neuroscience.2025.01.049
PMID: 39892815
Abstract

PURPOSE

In the context of EEG-based emotion recognition tasks, a conventional strategy involves the extraction of spatial and temporal features, subsequently fused for emotion prediction. However, due to the pronounced individual variability in EEG and the constrained performance of conventional time-series models, cross-subject experiments often yield suboptimal results. To address this limitation, we propose a novel network named Time-Space Emotion Network (TSEN), which capitalizes on the fusion of spatiotemporal information for emotion recognition.

METHODS

Diverging from prior models that integrate temporal and spatial features, our network introduces a Convolutional Block Attention Module (CBAM) during spatial feature extraction to judiciously allocate weights to feature channels and spatial positions. Furthermore, we bolster network stability and improve domain adaptation through the incorporation of a residual block featuring Switchable Whitening (SW). Temporal feature extraction is accomplished using a Temporal Convolutional Network (TCN), ensuring elevated prediction accuracy while maintaining a lightweight network structure.
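The channel-then-spatial gating idea behind CBAM can be illustrated concretely. The sketch below is a minimal NumPy illustration, not the paper's implementation: random weights stand in for the learned shared MLP, and the 7×7 convolution of the spatial branch is simplified to an additive pooling step. Function names and the `reduction` parameter are illustrative.

```python
import numpy as np

def channel_attention(x, reduction=2):
    """Gate each channel of a (C, H, W) feature map with a sigmoid weight."""
    c = x.shape[0]
    avg = x.mean(axis=(1, 2))  # global average pool, shape (C,)
    mx = x.max(axis=(1, 2))    # global max pool, shape (C,)
    # Shared two-layer MLP; weights are random here but learned in practice.
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    def mlp(v):
        return w2 @ np.maximum(w1 @ v, 0.0)  # ReLU hidden layer
    gate = 1.0 / (1.0 + np.exp(-(mlp(avg) + mlp(mx))))  # sigmoid, shape (C,)
    return x * gate[:, None, None]

def spatial_attention(x):
    """Gate each spatial position using channel-pooled statistics."""
    avg = x.mean(axis=0)  # shape (H, W)
    mx = x.max(axis=0)    # shape (H, W)
    # CBAM applies a 7x7 conv over [avg; mx]; a simple sum stands in here.
    gate = 1.0 / (1.0 + np.exp(-(avg + mx)))  # sigmoid, shape (H, W)
    return x * gate[None, :, :]

def cbam(x):
    """Channel attention followed by spatial attention, as in CBAM."""
    return spatial_attention(channel_attention(x))
```

Because both gates lie in (0, 1), the module reweights features without changing the map's shape, which is what lets it drop into a residual spatial-feature extractor.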

RESULTS

We conduct experiments on the preprocessed DEAP dataset. Ultimately, the average accuracy for arousal prediction is 0.7032 with a variance of 0.0876, and the F1 score is 0.6843. For valence prediction, the accuracy is 0.6792 with a variance of 0.0853, and the F1 score is 0.6826.

CONCLUSION

TSEN exhibits high accuracy and low variance in cross-subject emotion prediction tasks, effectively reducing individual differences among different subjects. Additionally, TSEN has a smaller parameter count, enabling faster execution.


Similar Articles

1
A temporal-spatial feature fusion network for emotion recognition with individual differences reduction.
Neuroscience. 2025 Mar 17;569:195-209. doi: 10.1016/j.neuroscience.2025.01.049. Epub 2025 Jan 30.
2
Emotion recognition using spatial-temporal EEG features through convolutional graph attention network.
J Neural Eng. 2023 Feb 14;20(1). doi: 10.1088/1741-2552/acb79e.
3
An EEG-based emotion recognition method by fusing multi-frequency-spatial features under multi-frequency bands.
J Neurosci Methods. 2025 Mar;415:110360. doi: 10.1016/j.jneumeth.2025.110360. Epub 2025 Jan 6.
4
CATM: A Multi-Feature-Based Cross-Scale Attentional Convolutional EEG Emotion Recognition Model.
Sensors (Basel). 2024 Jul 25;24(15):4837. doi: 10.3390/s24154837.
5
Cross-subject emotion recognition in brain-computer interface based on frequency band attention graph convolutional adversarial neural networks.
J Neurosci Methods. 2024 Nov;411:110276. doi: 10.1016/j.jneumeth.2024.110276. Epub 2024 Sep 3.
6
SST-CRAM: spatial-spectral-temporal based convolutional recurrent neural network with lightweight attention mechanism for EEG emotion recognition.
Cogn Neurodyn. 2024 Oct;18(5):2621-2635. doi: 10.1007/s11571-024-10114-z. Epub 2024 Apr 30.
7
Accelerating 3D Convolutional Neural Network with Channel Bottleneck Module for EEG-Based Emotion Recognition.
Sensors (Basel). 2022 Sep 8;22(18):6813. doi: 10.3390/s22186813.
8
Investigating the Use of Pretrained Convolutional Neural Network on Cross-Subject and Cross-Dataset EEG Emotion Recognition.
Sensors (Basel). 2020 Apr 4;20(7):2034. doi: 10.3390/s20072034.
9
FC-TFS-CGRU: A Temporal-Frequency-Spatial Electroencephalography Emotion Recognition Model Based on Functional Connectivity and a Convolutional Gated Recurrent Unit Hybrid Architecture.
Sensors (Basel). 2024 Mar 20;24(6):1979. doi: 10.3390/s24061979.
10
Attention-based 3D convolutional recurrent neural network model for multimodal emotion recognition.
Front Neurosci. 2024 Jan 10;17:1330077. doi: 10.3389/fnins.2023.1330077. eCollection 2023.