
Multi-branch convolutional neural network with cross-attention mechanism for emotion recognition.

Authors

Yan Fei, Guo Zekai, Iliyasu Abdullah M, Hirota Kaoru

Affiliations

School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, 130022, China.

College of Engineering, Prince Sattam Bin Abdulaziz University, Al-Kharj, 11942, Saudi Arabia.

Publication

Sci Rep. 2025 Feb 1;15(1):3976. doi: 10.1038/s41598-025-88248-1.

Abstract

Research on emotion recognition is an interesting area because of its wide-ranging applications in education, marketing, and medical fields. This study proposes a multi-branch convolutional neural network model based on cross-attention mechanism (MCNN-CA) for accurate recognition of different emotions. The proposed model provides automated extraction of relevant features from multimodal data and fusion of feature maps from diverse sources as modules for the subsequent emotion recognition. In the feature extraction stage, various convolutional neural networks were designed to extract critical information from multiple dimensional features. The feature fusion module was used to enhance the inter-correlation between features based on channel-efficient attention mechanism. This innovation proves effective in fusing distinctive features within a single mode and across different modes. The model was assessed based on EEG emotion recognition experiments on the SEED and SEED-IV datasets. Furthermore, the efficiency of the proposed model was evaluated via multimodal emotion experiments using EEG and text data from the ZuCo dataset. Comparative analysis alongside contemporary studies shows that our model excels in terms of accuracy, precision, recall, and F1-score.
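The abstract does not give the exact formulation of the MCNN-CA fusion step, but the general cross-attention mechanism it builds on can be illustrated: features from one modality (e.g. EEG) act as queries that attend over keys/values from another modality (e.g. text). A minimal NumPy sketch, with illustrative dimensions and function names that are not taken from the paper:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, key_feats):
    """Scaled dot-product cross-attention: queries from one modality
    attend over features of the other modality (used as keys and values)."""
    d_k = query_feats.shape[-1]
    scores = query_feats @ key_feats.T / np.sqrt(d_k)   # (n_q, n_k)
    weights = softmax(scores, axis=-1)                  # rows sum to 1
    return weights @ key_feats                          # (n_q, d_k)

# Toy features: 4 EEG tokens and 6 text tokens, 8-dimensional each.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((4, 8))
text = rng.standard_normal((6, 8))

fused = cross_attention(eeg, text)
print(fused.shape)  # (4, 8): EEG tokens re-expressed via attended text features
```

In the full model, such cross-attended maps would be concatenated or summed with the per-modality CNN features before classification; the sketch only shows the attention step itself.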


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/04a9/11787301/4659350fc507/41598_2025_88248_Fig1_HTML.jpg
