
A novel feature fusion network for multimodal emotion recognition from EEG and eye movement signals.

Author information

Fu Baole, Gu Chunrui, Fu Ming, Xia Yuxiao, Liu Yinhua

Affiliations

School of Automation, Qingdao University, Qingdao, China.

Institute for Future, Qingdao University, Qingdao, China.

Publication information

Front Neurosci. 2023 Aug 3;17:1234162. doi: 10.3389/fnins.2023.1234162. eCollection 2023.

Abstract

Emotion recognition is a challenging task, and the use of multimodal fusion methods for emotion recognition has become a trend. Fusion vectors can provide a more comprehensive representation of changes in the subject's emotional state, leading to more accurate emotion recognition results. Different fusion inputs or feature fusion methods have varying effects on the final fusion outcome. In this paper, we propose a novel Multimodal Feature Fusion Neural Network model (MFFNN) that effectively extracts complementary information from eye movement signals and performs feature fusion with EEG signals. We construct a dual-branch feature extraction module to extract features from both modalities while ensuring temporal alignment. A multi-scale feature fusion module is introduced, which utilizes cross-channel soft attention to adaptively select information from different spatial scales, enabling the acquisition of features at different spatial scales for effective fusion. We conduct experiments on the publicly available SEED-IV dataset, and our model achieves an accuracy of 87.32% in recognizing four emotions (happiness, sadness, fear, and neutrality). The results demonstrate that the proposed model can better explore complementary information from EEG and eye movement signals, thereby improving the accuracy and stability of emotion recognition.
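The multi-scale fusion module described above is reminiscent of selective-kernel-style attention, in which parallel convolution branches with different kernel sizes produce features at several spatial scales and a softmax over the branch dimension adaptively weights them per channel. Below is a minimal PyTorch sketch of such a module under that assumption; it is not the authors' released implementation, and the class name, kernel sizes, reduction ratio, and the simple additive joining of the two modality streams are all illustrative placeholders.

```python
# Minimal sketch (assumed SK-style design, not the paper's official code) of a
# multi-scale fusion module with cross-channel soft attention over branch scales.
import torch
import torch.nn as nn


class MultiScaleSoftAttentionFusion(nn.Module):
    """Fuse two temporally aligned modality feature maps: extract them at
    several spatial scales, then let a soft attention vector choose, per
    channel, how much of each scale to keep. Hyperparameters are illustrative."""

    def __init__(self, channels: int, kernel_sizes=(3, 5, 7), reduction: int = 4):
        super().__init__()
        # One conv branch per spatial scale (odd kernels keep the length fixed).
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv1d(channels, channels, k, padding=k // 2, bias=False),
                nn.BatchNorm1d(channels),
                nn.ReLU(inplace=True),
            )
            for k in kernel_sizes
        )
        hidden = max(channels // reduction, 8)
        self.squeeze = nn.Sequential(nn.Linear(channels, hidden), nn.ReLU(inplace=True))
        # One attention head per branch; softmax across branches yields the
        # cross-channel soft attention weights.
        self.heads = nn.ModuleList(nn.Linear(hidden, channels) for _ in kernel_sizes)

    def forward(self, eeg: torch.Tensor, eye: torch.Tensor) -> torch.Tensor:
        # eeg, eye: (batch, channels, time), already temporally aligned.
        x = eeg + eye                                   # simple joint representation (assumption)
        feats = torch.stack([b(x) for b in self.branches], dim=1)   # (B, S, C, T)
        pooled = feats.sum(dim=1).mean(dim=-1)          # global average pool -> (B, C)
        z = self.squeeze(pooled)                        # (B, hidden)
        logits = torch.stack([h(z) for h in self.heads], dim=1)     # (B, S, C)
        attn = logits.softmax(dim=1).unsqueeze(-1)      # soft selection over scales
        return (feats * attn).sum(dim=1)                # fused map, (B, C, T)
```

With both modality feature maps shaped (batch, channels, time), a call such as `MultiScaleSoftAttentionFusion(64)(eeg_feat, eye_feat)` returns a fused map of the same shape; the softmax over the branch axis is what realizes the "adaptive selection across spatial scales" the abstract refers to.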


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0423/10436100/1428ddf8627c/fnins-17-1234162-g0001.jpg
