Suppr 超能文献



A novel feature fusion network for multimodal emotion recognition from EEG and eye movement signals.

Authors

Fu Baole, Gu Chunrui, Fu Ming, Xia Yuxiao, Liu Yinhua

Affiliations

School of Automation, Qingdao University, Qingdao, China.

Institute for Future, Qingdao University, Qingdao, China.

Publication

Front Neurosci. 2023 Aug 3;17:1234162. doi: 10.3389/fnins.2023.1234162. eCollection 2023.

DOI: 10.3389/fnins.2023.1234162
PMID: 37600016
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10436100/
Abstract

Emotion recognition is a challenging task, and the use of multimodal fusion methods for emotion recognition has become a trend. Fusion vectors can provide a more comprehensive representation of changes in the subject's emotional state, leading to more accurate emotion recognition results. Different fusion inputs or feature fusion methods have varying effects on the final fusion outcome. In this paper, we propose a novel Multimodal Feature Fusion Neural Network model (MFFNN) that effectively extracts complementary information from eye movement signals and performs feature fusion with EEG signals. We construct a dual-branch feature extraction module to extract features from both modalities while ensuring temporal alignment. A multi-scale feature fusion module is introduced, which utilizes cross-channel soft attention to adaptively select information from different spatial scales, enabling the acquisition of features at different spatial scales for effective fusion. We conduct experiments on the publicly available SEED-IV dataset, and our model achieves an accuracy of 87.32% in recognizing four emotions (happiness, sadness, fear, and neutrality). The results demonstrate that the proposed model can better explore complementary information from EEG and eye movement signals, thereby improving accuracy and stability in emotion recognition.
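The fusion step the abstract describes, cross-channel soft attention that adaptively weights feature maps from different spatial scales, follows the general pattern of selective-kernel-style attention: pool the fused features per channel, pass them through a bottleneck, score each scale branch per channel, and softmax across branches. The sketch below is an illustration of that general mechanism in NumPy, not the authors' implementation; all shapes, weight matrices, and the `soft_attention_fuse` helper are hypothetical.

```python
import numpy as np

def softmax(x, axis=0):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def soft_attention_fuse(branches, w_reduce, w_select):
    """Fuse per-scale feature maps (list of [C, T] arrays) with
    cross-channel soft attention: element-wise sum -> global average
    pool -> bottleneck -> per-branch channel scores -> softmax across
    branches -> attention-weighted sum. Shapes are illustrative only."""
    stacked = np.stack(branches)                  # [S, C, T]
    summed = stacked.sum(axis=0)                  # element-wise fuse, [C, T]
    pooled = summed.mean(axis=1)                  # global average pool, [C]
    z = np.maximum(w_reduce @ pooled, 0.0)        # bottleneck + ReLU, [D]
    scores = np.stack([w @ z for w in w_select])  # one score per scale, [S, C]
    attn = softmax(scores, axis=0)                # soft attention over scales
    return (attn[:, :, None] * stacked).sum(axis=0)  # [C, T]

# Toy usage: two scale branches, 8 channels, 16 time steps.
rng = np.random.default_rng(0)
b1, b2 = rng.standard_normal((8, 16)), rng.standard_normal((8, 16))
w_reduce = rng.standard_normal((4, 8)) * 0.1
w_select = [rng.standard_normal((8, 4)) * 0.1 for _ in range(2)]
fused = soft_attention_fuse([b1, b2], w_reduce, w_select)
print(fused.shape)  # (8, 16)
```

Because the attention weights sum to one across branches for each channel, the output is a per-channel convex combination of the scale branches; feeding two identical branches returns that branch unchanged.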


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0423/10436100/1428ddf8627c/fnins-17-1234162-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0423/10436100/b530b85c2e26/fnins-17-1234162-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0423/10436100/f70aad7b54eb/fnins-17-1234162-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0423/10436100/e94c4c6601cc/fnins-17-1234162-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0423/10436100/f452980c30c4/fnins-17-1234162-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0423/10436100/e3de6709be55/fnins-17-1234162-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0423/10436100/8c67440a6d59/fnins-17-1234162-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0423/10436100/5edd11ad104e/fnins-17-1234162-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0423/10436100/2a62f775f08e/fnins-17-1234162-g0009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0423/10436100/3cca5024536d/fnins-17-1234162-g0010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0423/10436100/8ec1f7796960/fnins-17-1234162-g0011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0423/10436100/76c3b7092b02/fnins-17-1234162-g0012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0423/10436100/177dd87e4c50/fnins-17-1234162-g0013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0423/10436100/1ab2376bc353/fnins-17-1234162-g0014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0423/10436100/1c9dc9927b6d/fnins-17-1234162-g0015.jpg

Similar Articles

1
A novel feature fusion network for multimodal emotion recognition from EEG and eye movement signals.
Front Neurosci. 2023 Aug 3;17:1234162. doi: 10.3389/fnins.2023.1234162. eCollection 2023.
2
Attention-based 3D convolutional recurrent neural network model for multimodal emotion recognition.
Front Neurosci. 2024 Jan 10;17:1330077. doi: 10.3389/fnins.2023.1330077. eCollection 2023.
3
Cross-Modal Guiding Neural Network for Multimodal Emotion Recognition From EEG and Eye Movement Signals.
IEEE J Biomed Health Inform. 2024 Oct;28(10):5865-5876. doi: 10.1109/JBHI.2024.3419043. Epub 2024 Oct 3.
4
CDBA: a novel multi-branch feature fusion model for EEG-based emotion recognition.
Front Physiol. 2023 Jul 20;14:1200656. doi: 10.3389/fphys.2023.1200656. eCollection 2023.
5
Emotion recognition using spatial-temporal EEG features through convolutional graph attention network.
J Neural Eng. 2023 Feb 14;20(1). doi: 10.1088/1741-2552/acb79e.
6
Spatial-frequency-temporal convolutional recurrent network for olfactory-enhanced EEG emotion recognition.
J Neurosci Methods. 2022 Jul 1;376:109624. doi: 10.1016/j.jneumeth.2022.109624. Epub 2022 May 16.
7
CATM: A Multi-Feature-Based Cross-Scale Attentional Convolutional EEG Emotion Recognition Model.
Sensors (Basel). 2024 Jul 25;24(15):4837. doi: 10.3390/s24154837.
8
Multi-scale 3D-CRU for EEG emotion recognition.
Biomed Phys Eng Express. 2024 May 14;10(4). doi: 10.1088/2057-1976/ad43f1.
9
Multimodal Emotion Recognition Based on Facial Expressions, Speech, and EEG.
IEEE Open J Eng Med Biol. 2023 Jan 27;5:396-403. doi: 10.1109/OJEMB.2023.3240280. eCollection 2024.
10
Multidimensional Feature in Emotion Recognition Based on Multi-Channel EEG Signals.
Entropy (Basel). 2022 Dec 15;24(12):1830. doi: 10.3390/e24121830.

Cited By

1
Bangla Speech Emotion Recognition Using Deep Learning-Based Ensemble Learning and Feature Fusion.
J Imaging. 2025 Aug 14;11(8):273. doi: 10.3390/jimaging11080273.
2
A Comprehensive Review of Multimodal Emotion Recognition: Techniques, Challenges, and Future Directions.
Biomimetics (Basel). 2025 Jun 27;10(7):418. doi: 10.3390/biomimetics10070418.
3
[Dynamic continuous emotion recognition method based on electroencephalography and eye movement signals].
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi. 2025 Feb 25;42(1):32-41. doi: 10.7507/1001-5515.202408013.
4
Improved BCI calibration in multimodal emotion recognition using heterogeneous adversarial transfer learning.
PeerJ Comput Sci. 2025 Jan 20;11:e2649. doi: 10.7717/peerj-cs.2649. eCollection 2025.

References

1
Subject-independent EEG classification based on a hybrid neural network.
Front Neurosci. 2023 Jun 2;17:1124089. doi: 10.3389/fnins.2023.1124089. eCollection 2023.
2
Cross-modal guiding and reweighting network for multi-modal RSVP-based target detection.
Neural Netw. 2023 Apr;161:65-82. doi: 10.1016/j.neunet.2023.01.009. Epub 2023 Jan 16.
3
A Cross-modality Deep Learning Method for Measuring Decision Confidence from Eye Movement Signals.
Annu Int Conf IEEE Eng Med Biol Soc. 2022 Jul;2022:3342-3345. doi: 10.1109/EMBC48229.2022.9871605.
4
Automated accurate emotion recognition system using rhythm-specific deep convolutional neural network technique with multi-channel EEG signals.
Comput Biol Med. 2021 Jul;134:104428. doi: 10.1016/j.compbiomed.2021.104428. Epub 2021 May 6.
5
Capsule Network for ERP Detection in Brain-Computer Interface.
IEEE Trans Neural Syst Rehabil Eng. 2021;29:718-730. doi: 10.1109/TNSRE.2021.3070327. Epub 2021 Apr 19.
6
EEG-based emotion recognition using 4D convolutional recurrent neural network.
Cogn Neurodyn. 2020 Dec;14(6):815-828. doi: 10.1007/s11571-020-09634-1. Epub 2020 Sep 14.
7
Time-Frequency Representation and Convolutional Neural Network-Based Emotion Recognition.
IEEE Trans Neural Netw Learn Syst. 2021 Jul;32(7):2901-2909. doi: 10.1109/TNNLS.2020.3008938. Epub 2021 Jul 6.
8
Deep learning for electroencephalogram (EEG) classification tasks: a review.
J Neural Eng. 2019 Jun;16(3):031001. doi: 10.1088/1741-2552/ab0ab5. Epub 2019 Feb 26.
9
Spatial-Temporal Recurrent Neural Network for Emotion Recognition.
IEEE Trans Cybern. 2019 Mar;49(3):839-847. doi: 10.1109/TCYB.2017.2788081. Epub 2018 Jan 30.
10
EmotionMeter: A Multimodal Framework for Recognizing Human Emotions.
IEEE Trans Cybern. 2019 Mar;49(3):1110-1122. doi: 10.1109/TCYB.2018.2797176. Epub 2018 Feb 8.