

MB-MSTFNet: A Multi-Band Spatio-Temporal Attention Network for EEG Sensor-Based Emotion Recognition.

Author Information

Fang Cheng, Liu Sitong, Gao Bing

Affiliations

Key Laboratory of Civil Aviation Thermal Hazards Prevention and Emergency Response, Civil Aviation University of China, Tianjin 300300, China.

College of Electronic Information and Automation, Civil Aviation University of China, Tianjin 300300, China.

Publication Information

Sensors (Basel). 2025 Aug 5;25(15):4819. doi: 10.3390/s25154819.

Abstract

Emotion analysis based on electroencephalogram (EEG) sensors is pivotal for human-machine interaction yet faces key challenges in spatio-temporal feature fusion and cross-band and brain-region integration from multi-channel sensor-derived signals. This paper proposes MB-MSTFNet, a novel framework for EEG emotion recognition. The model constructs a 3D tensor to encode band-space-time correlations of sensor data, explicitly modeling frequency-domain dynamics and spatial distributions of EEG sensors across brain regions. A multi-scale CNN-Inception module extracts hierarchical spatial features via diverse convolutional kernels and pooling operations, capturing localized sensor activations and global brain network interactions. Bi-directional GRUs (BiGRUs) model temporal dependencies in sensor time-series, adept at capturing long-range dynamic patterns. Multi-head self-attention highlights critical time windows and brain regions by assigning adaptive weights to relevant sensor channels, suppressing noise from non-contributory electrodes. Experiments on the DEAP dataset, containing multi-channel EEG sensor recordings, show that MB-MSTFNet achieves 96.80 ± 0.92% valence accuracy, 98.02 ± 0.76% arousal accuracy for binary classification tasks, and 92.85 ± 1.45% accuracy for four-class classification. Ablation studies validate that feature fusion, bidirectional temporal modeling, and multi-scale mechanisms significantly enhance performance by improving feature complementarity. This sensor-driven framework advances affective computing by integrating spatio-temporal dynamics and multi-band interactions of EEG sensor signals, enabling efficient real-time emotion recognition.
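The abstract's first step is to encode multi-channel EEG into a 3D band-space-time tensor before the CNN-Inception, BiGRU, and attention stages. As a minimal sketch of that idea only, the snippet below computes FFT-based band power per channel per time window and stacks the result into a (bands, channels, windows) tensor. The band edges, 1-second window, and `band_power_tensor` helper are illustrative assumptions, not the paper's exact preprocessing; only the DEAP-like shape (32 channels, 128 Hz) comes from the dataset the experiments use.

```python
import numpy as np

# Hypothetical sketch of a band-space-time tensor for EEG, assuming
# DEAP-like recordings: 32 channels sampled at 128 Hz. Band ranges and
# window length are illustrative, not the paper's reported settings.
FS = 128  # DEAP EEG sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_power_tensor(eeg, fs=FS, win_sec=1.0):
    """Map raw EEG (channels, samples) to a (bands, channels, windows) tensor
    of per-window spectral power, one slice per frequency band."""
    n_ch, n_samp = eeg.shape
    win = int(fs * win_sec)          # samples per time window
    n_win = n_samp // win
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    out = np.zeros((len(BANDS), n_ch, n_win))
    for w in range(n_win):
        seg = eeg[:, w * win:(w + 1) * win]
        psd = np.abs(np.fft.rfft(seg, axis=1)) ** 2   # (channels, freq bins)
        for b, (lo, hi) in enumerate(BANDS.values()):
            mask = (freqs >= lo) & (freqs < hi)
            out[b, :, w] = psd[:, mask].sum(axis=1)   # band power per channel
    return out

# Usage with random data shaped like a 4-second, 32-channel segment:
rng = np.random.default_rng(0)
x = rng.standard_normal((32, 4 * FS))
t = band_power_tensor(x)
print(t.shape)  # (4, 32, 4): bands x channels x time windows
```

A tensor of this shape is what lets the later stages operate along their natural axes: spatial convolutions across the channel dimension, recurrence across the window dimension, and per-band feature maps across the first dimension.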


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/35bd/12349021/0472192987c0/sensors-25-04819-g001.jpg
