
Multi-scale 3D-CRU for EEG emotion recognition.

Affiliations

School of Computer Science and Technology, Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education, Anhui University, Hefei, Anhui, People's Republic of China.

School of Biological Science and Medical Engineering, Key Laboratory of Child Development and Learning Science, Southeast University, Nanjing 210096, People's Republic of China.

Publication Information

Biomed Phys Eng Express. 2024 May 14;10(4). doi: 10.1088/2057-1976/ad43f1.

DOI: 10.1088/2057-1976/ad43f1
PMID: 38670076
Abstract

In this paper, we propose a novel multi-scale 3D-CRU model, with the goal of extracting more discriminative emotion features from EEG signals. By concurrently exploiting the relative electrode locations and different frequency subbands of EEG signals, a three-dimensional feature representation is reconstructed wherein the Delta (δ) frequency pattern is included. We employ a multi-scale approach, termed 3D-CRU, to concurrently extract frequency and spatial features at varying levels of granularity within each time segment. In the proposed 3D-CRU, we introduce a multi-scale 3D Convolutional Neural Network (3D-CNN) to effectively capture discriminative information embedded within the 3D feature representation. To model the temporal dynamics across consecutive time segments, we incorporate a Gated Recurrent Unit (GRU) module to extract temporal representations from the time series of combined frequency-spatial features. Ultimately, the 3D-CRU model yields a global feature representation, encompassing comprehensive information across the time, frequency, and spatial domains. Extensive experimental assessments conducted on the publicly available DEAP and SEED databases provide empirical evidence supporting the enhanced performance of our proposed model in emotion recognition. These findings underscore the efficacy of the features extracted by the proposed multi-scale 3D-CRU model, particularly with the incorporation of the Delta (δ) frequency pattern. Specifically, on the DEAP dataset, the accuracies for Valence and Arousal are 93.12% and 94.31%, respectively, while on the SEED dataset, the accuracy is 92.25%.
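
To make the described pipeline concrete, the following is a minimal PyTorch sketch of the architecture outlined in the abstract: per-segment 3D tensors (frequency bands mapped onto an electrode grid) pass through two parallel 3D-CNN branches with different kernel sizes (the multi-scale part), and the resulting per-segment features feed a GRU that models temporal dynamics across segments. The 9 x 9 electrode grid, five frequency bands, kernel sizes, and layer widths are illustrative assumptions, not the authors' published configuration.

# Illustrative sketch (not the authors' code): multi-scale 3D-CNN + GRU over
# per-segment tensors of shape (bands, grid, grid), e.g. band power on a 9x9 grid.
import torch
import torch.nn as nn

class MultiScale3DCRU(nn.Module):
    def __init__(self, n_bands=5, grid=9, hidden=128, n_classes=2):
        super().__init__()
        # Two parallel 3D convolution branches with different kernel sizes
        # approximate the multi-scale frequency-spatial feature extraction.
        self.branch_small = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.branch_large = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
        )
        feat_dim = 2 * 32 * n_bands * grid * grid
        self.proj = nn.Linear(feat_dim, hidden)
        # GRU models temporal dynamics across consecutive time segments.
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, x):
        # x: (batch, time_segments, bands, grid, grid)
        b, t = x.shape[:2]
        x = x.reshape(b * t, 1, *x.shape[2:])      # one channel per segment
        feats = torch.cat([self.branch_small(x), self.branch_large(x)], dim=1)
        feats = self.proj(feats.flatten(1)).reshape(b, t, -1)
        out, _ = self.gru(feats)                   # temporal representation
        return self.classifier(out[:, -1])         # classify from last segment

# Example: 32 trials, 10 one-second segments, 5 bands (delta..gamma), 9x9 grid
model = MultiScale3DCRU()
logits = model(torch.randn(32, 10, 5, 9, 9))
print(logits.shape)  # torch.Size([32, 2])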


Similar Articles

1. Multi-scale 3D-CRU for EEG emotion recognition.
Biomed Phys Eng Express. 2024 May 14;10(4). doi: 10.1088/2057-1976/ad43f1.
2. Subject-independent EEG emotion recognition with hybrid spatio-temporal GRU-Conv architecture.
Med Biol Eng Comput. 2023 Jan;61(1):61-73. doi: 10.1007/s11517-022-02686-x. Epub 2022 Nov 2.
3. Spatio-Temporal Representation of an Electroencephalogram for Emotion Recognition Using a Three-Dimensional Convolutional Neural Network.
Sensors (Basel). 2020 Jun 20;20(12):3491. doi: 10.3390/s20123491.
4. CATM: A Multi-Feature-Based Cross-Scale Attentional Convolutional EEG Emotion Recognition Model.
Sensors (Basel). 2024 Jul 25;24(15):4837. doi: 10.3390/s24154837.
5. EEG-based emotion recognition using multi-scale dynamic CNN and gated transformer.
Sci Rep. 2024 Dec 28;14(1):31319. doi: 10.1038/s41598-024-82705-z.
6. An EEG-based emotion recognition method by fusing multi-frequency-spatial features under multi-frequency bands.
J Neurosci Methods. 2025 Mar;415:110360. doi: 10.1016/j.jneumeth.2025.110360. Epub 2025 Jan 6.
7. Research on emotion recognition using sparse EEG channels and cross-subject modeling based on CNN-KAN-[Formula: see text] model.
PLoS One. 2025 May 27;20(5):e0322583. doi: 10.1371/journal.pone.0322583. eCollection 2025.
8. Accelerating 3D Convolutional Neural Network with Channel Bottleneck Module for EEG-Based Emotion Recognition.
Sensors (Basel). 2022 Sep 8;22(18):6813. doi: 10.3390/s22186813.
9. Automated accurate emotion recognition system using rhythm-specific deep convolutional neural network technique with multi-channel EEG signals.
Comput Biol Med. 2021 Jul;134:104428. doi: 10.1016/j.compbiomed.2021.104428. Epub 2021 May 6.
10. An Attention-Based Multi-Domain Bi-Hemisphere Discrepancy Feature Fusion Model for EEG Emotion Recognition.
IEEE J Biomed Health Inform. 2024 Oct;28(10):5890-5903. doi: 10.1109/JBHI.2024.3418010. Epub 2024 Oct 3.

Cited By

1. Research on emotion recognition using sparse EEG channels and cross-subject modeling based on CNN-KAN-[Formula: see text] model.
PLoS One. 2025 May 27;20(5):e0322583. doi: 10.1371/journal.pone.0322583. eCollection 2025.
2. Design and analysis of teaching early warning system based on multimodal data in an intelligent learning environment.
PeerJ Comput Sci. 2025 Mar 4;11:e2692. doi: 10.7717/peerj-cs.2692. eCollection 2025.
3. CATM: A Multi-Feature-Based Cross-Scale Attentional Convolutional EEG Emotion Recognition Model.
Sensors (Basel). 2024 Jul 25;24(15):4837. doi: 10.3390/s24154837.