Sun Weitong, Yan Xingya, Su Yuping, Wang Gaihua, Zhang Yumei
School of Digital Art, Xi'an University of Posts & Telecommunications, Xi'an 710061, China.
Key Laboratory of Intelligent Media in Shaanxi Province Colleges and Universities, Xi'an 710061, China.
Sensors (Basel). 2025 Mar 24;25(7):2029. doi: 10.3390/s25072029.
To address the shortcomings of EEG emotion recognition models in feature-representation granularity and spatiotemporal-dependency modeling, a multimodal emotion recognition model that integrates multi-scale feature representation with an attention mechanism is proposed. The model consists of a feature extraction module, a feature fusion module, and a classification module. The feature extraction module includes a multi-stream network for extracting shallow EEG features and a dual-scale attention module for extracting shallow EOG features. Multi-scale, multi-granularity feature fusion improves the richness and discriminability of the multimodal feature representation. Experimental results on two datasets show that the proposed model outperforms existing models.
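The pipeline the abstract describes (multi-scale EEG features, attention-pooled EOG features, fusion into one representation for a classifier) can be sketched minimally as below. This is an illustrative toy, not the authors' implementation: the pooling scales, the single-channel signals, and the `multi_scale_features` / `attention_pool` helpers are all assumptions introduced here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def multi_scale_features(x, scales=(2, 4)):
    # Toy stand-in for the multi-stream EEG branch: average-pool the
    # signal at several window sizes (coarse-to-fine scales) and
    # concatenate the resulting feature vectors.
    feats = []
    for s in scales:
        n = (len(x) // s) * s          # trim so length divides evenly
        feats.append(x[:n].reshape(-1, s).mean(axis=1))
    return np.concatenate(feats)

def attention_pool(x):
    # Toy stand-in for the attention-based EOG branch: softmax weights
    # over time steps, then a weighted sum of the signal.
    w = np.exp(x - x.max())
    w /= w.sum()
    return np.array([np.dot(w, x)])

eeg = rng.standard_normal(64)  # hypothetical single-channel EEG segment
eog = rng.standard_normal(64)  # hypothetical EOG segment

# Fusion: concatenate the two modalities' features into one vector,
# which would then feed a classification head (e.g., softmax over
# emotion classes).
fused = np.concatenate([multi_scale_features(eeg), attention_pool(eog)])
```

With scales (2, 4) on a 64-sample segment the EEG branch yields 32 + 16 = 48 features, and the EOG branch one pooled value, so `fused` has 49 dimensions; the real model replaces these hand-crafted poolings with learned convolutional streams and attention layers.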