MSDSANet: Multimodal Emotion Recognition Based on Multi-Stream Network and Dual-Scale Attention Network Feature Representation.

Authors

Sun Weitong, Yan Xingya, Su Yuping, Wang Gaihua, Zhang Yumei

Affiliations

School of Digital Art, Xi'an University of Posts & Telecommunications, Xi'an 710061, China.

Key Laboratory of Intelligent Media in Shaanxi Province Colleges and Universities, Xi'an 710061, China.

Publication

Sensors (Basel). 2025 Mar 24;25(7):2029. doi: 10.3390/s25072029.

Abstract

To address the shortcomings of EEG emotion recognition models in feature-representation granularity and spatiotemporal dependency modeling, a multimodal emotion recognition model integrating multi-scale feature representation and an attention mechanism is proposed. The model consists of a feature extraction module, a feature fusion module, and a classification module. The feature extraction module includes a multi-stream network module for extracting shallow EEG features and a dual-scale attention module for extracting shallow EOG features. The multi-scale, multi-granularity feature fusion improves the richness and discriminability of the multimodal feature representation. Experimental results on two datasets show that the proposed model outperforms existing models.
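
The module breakdown in the abstract (a multi-stream EEG branch, a dual-scale attention EOG branch, feature fusion, then classification) can be illustrated with a minimal sketch. This is not the authors' implementation: the abstract gives no layer configurations, so the kernel sizes, channel counts, input window lengths, concatenation-based fusion, and four-class output below are all assumptions, written in PyTorch purely for illustration.

```python
# Minimal sketch of the architecture outlined in the abstract, NOT the paper's code.
# Assumed shapes: EEG input (batch, 32 channels, 128 samples), EOG input (batch, 2, 128).
import torch
import torch.nn as nn


class MultiStreamEEG(nn.Module):
    """Multi-stream module: parallel 1-D conv streams with different kernel sizes
    extract shallow EEG features at several temporal scales (kernel sizes assumed)."""

    def __init__(self, in_ch=32, out_ch=32, kernels=(3, 5, 7)):
        super().__init__()
        self.streams = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(in_ch, out_ch, k, padding=k // 2),
                nn.BatchNorm1d(out_ch),
                nn.ReLU(),
            )
            for k in kernels
        ])

    def forward(self, x):                     # x: (B, in_ch, T)
        feats = [stream(x) for stream in self.streams]
        return torch.cat(feats, dim=1)        # (B, out_ch * num_streams, T)


class DualScaleAttentionEOG(nn.Module):
    """Dual-scale attention module: a fine and a coarse temporal conv branch,
    reweighted by a channel-attention gate (gating scheme assumed)."""

    def __init__(self, in_ch=2, out_ch=16):
        super().__init__()
        self.fine = nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=1)
        self.coarse = nn.Conv1d(in_ch, out_ch, kernel_size=15, padding=7)
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool1d(1),
            nn.Conv1d(2 * out_ch, 2 * out_ch, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):                     # x: (B, in_ch, T)
        f = torch.cat([self.fine(x), self.coarse(x)], dim=1)
        return f * self.gate(f)               # channel-wise attention reweighting


class MSDSANetSketch(nn.Module):
    """Feature extraction -> fusion -> classification, as listed in the abstract.
    Fusion here is simple concatenation of pooled branch features (an assumption)."""

    def __init__(self, num_classes=4):
        super().__init__()
        self.eeg_branch = MultiStreamEEG()
        self.eog_branch = DualScaleAttentionEOG()
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.classifier = nn.Sequential(
            nn.Linear(32 * 3 + 16 * 2, 64), nn.ReLU(), nn.Linear(64, num_classes)
        )

    def forward(self, eeg, eog):
        eeg_f = self.pool(self.eeg_branch(eeg)).flatten(1)   # (B, 96)
        eog_f = self.pool(self.eog_branch(eog)).flatten(1)   # (B, 32)
        fused = torch.cat([eeg_f, eog_f], dim=1)             # (B, 128)
        return self.classifier(fused)                        # (B, num_classes)


if __name__ == "__main__":
    model = MSDSANetSketch()
    eeg = torch.randn(8, 32, 128)   # batch of 8 EEG windows
    eog = torch.randn(8, 2, 128)    # matching EOG windows
    print(model(eeg, eog).shape)    # torch.Size([8, 4])
```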

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e396/11991317/5c44bc855363/sensors-25-02029-g001.jpg
