

A novel signal to image transformation and feature level fusion for multimodal emotion recognition.

Affiliations

Department of Computer Engineering, Karadeniz Technical University, Trabzon, Turkey.

Publication Information

Biomed Tech (Berl). 2021 Apr 7;66(4):353-362. doi: 10.1515/bmt-2020-0229. Print 2021 Aug 26.

DOI: 10.1515/bmt-2020-0229
PMID: 33823091
Abstract

Emotion is one of the most complex and difficult expressions to predict. Many recognition systems based on classification methods have addressed different types of emotion recognition problems. In this paper, we propose a multimodal fusion method combining electroencephalography (EEG) and electrooculography (EOG) signals for emotion recognition. Before the feature extraction stage, we applied different angle-amplitude transformations to the EEG-EOG signals. These transformations take arbitrary time-domain signals and convert them into two-dimensional images called Angle-Amplitude Graphs (AAGs). We then extracted image-based features using the scale-invariant feature transform (SIFT), fused the features derived from the EEG and EOG signals, and finally performed classification with support vector machines. To verify the validity of the proposed methods, we performed experiments on the multimodal DEAP dataset, a benchmark widely used for emotion analysis with physiological signals. In the experiments, we applied the proposed emotion recognition procedure to the arousal-valence dimensions, achieving 91.53% accuracy for the arousal space and 90.31% for the valence space after fusion. The experimental results showed that combining AAG image features of the EEG-EOG signals under the baseline angle-amplitude transformation approaches enhanced classification performance on the DEAP dataset.
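The first stage of the pipeline — mapping an arbitrary time-domain signal onto a two-dimensional angle-amplitude image — can be sketched as follows. The abstract does not define the AAG construction precisely, so this is a hypothetical illustration: the "angle" axis is taken from the arctangent of first differences and the "amplitude" axis from the raw sample values, rasterised onto a fixed-size grid. The paper's actual transformation, as well as the downstream SIFT extraction and SVM classification, may differ.

```python
import numpy as np

def angle_amplitude_image(signal, size=64):
    """Map a 1-D signal to a 2-D 'angle-amplitude' image.

    Hypothetical reading of the AAG idea: each sample contributes one
    point whose x-coordinate is a local 'angle' (arctan of the first
    difference) and whose y-coordinate is the amplitude; the points are
    rasterised onto a size x size binary image.
    """
    signal = np.asarray(signal, dtype=float)
    diff = np.diff(signal, prepend=signal[0])   # local slope per sample
    angle = np.arctan(diff)                     # "angle" axis, in (-pi/2, pi/2)
    amp = signal                                # "amplitude" axis

    def to_idx(v):
        # Normalise values to integer pixel indices in [0, size-1].
        v = v - v.min()
        rng = v.max() if v.max() > 0 else 1.0
        return np.clip((v / rng * (size - 1)).astype(int), 0, size - 1)

    img = np.zeros((size, size), dtype=np.uint8)
    img[to_idx(amp), to_idx(angle)] = 255       # mark visited (amp, angle) cells
    return img

# Toy EEG-like signal: 1 s of a 10 Hz sine plus noise, sampled at 128 Hz.
t = np.linspace(0, 1, 128, endpoint=False)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
img = angle_amplitude_image(x)
print(img.shape, int(img.max()))
```

In a feature-level fusion scheme of the kind the abstract describes, such an image would be built per channel (EEG and EOG separately), image descriptors extracted from each, and the descriptor vectors concatenated before being passed to the classifier.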


Similar Articles

1
A novel signal to image transformation and feature level fusion for multimodal emotion recognition.
Biomed Tech (Berl). 2021 Apr 7;66(4):353-362. doi: 10.1515/bmt-2020-0229. Print 2021 Aug 26.
2
CNN-XGBoost fusion-based affective state recognition using EEG spectrogram image analysis.
Sci Rep. 2022 Aug 19;12(1):14122. doi: 10.1038/s41598-022-18257-x.
3
Multiple-output support vector machine regression with feature selection for arousal/valence space emotion assessment.
Annu Int Conf IEEE Eng Med Biol Soc. 2014;2014:970-3. doi: 10.1109/EMBC.2014.6943754.
4
An Ensemble Learning Method for Emotion Charting Using Multimodal Physiological Signals.
Sensors (Basel). 2022 Dec 4;22(23):9480. doi: 10.3390/s22239480.
5
Graph Theoretical Analysis of EEG Functional Connectivity Patterns and Fusion with Physiological Signals for Emotion Recognition.
Sensors (Basel). 2022 Oct 26;22(21):8198. doi: 10.3390/s22218198.
6
EEG-Based Emotion Recognition Using Quadratic Time-Frequency Distribution.
Sensors (Basel). 2018 Aug 20;18(8):2739. doi: 10.3390/s18082739.
7
Feature selection for multimodal emotion recognition in the arousal-valence space.
Annu Int Conf IEEE Eng Med Biol Soc. 2013;2013:4330-3. doi: 10.1109/EMBC.2013.6610504.
8
EEG emotion recognition based on data-driven signal auto-segmentation and feature fusion.
J Affect Disord. 2024 Sep 15;361:356-366. doi: 10.1016/j.jad.2024.06.042. Epub 2024 Jun 15.
9
Fusion of Motif- and Spectrum-Related Features for Improved EEG-Based Emotion Recognition.
Comput Intell Neurosci. 2019 Jan 17;2019:3076324. doi: 10.1155/2019/3076324. eCollection 2019.
10
Fused CNN-LSTM deep learning emotion recognition model using electroencephalography signals.
Int J Neurosci. 2023 Jun;133(6):587-597. doi: 10.1080/00207454.2021.1941947. Epub 2021 Aug 27.

Cited By

1
MSDSANet: Multimodal Emotion Recognition Based on Multi-Stream Network and Dual-Scale Attention Network Feature Representation.
Sensors (Basel). 2025 Mar 24;25(7):2029. doi: 10.3390/s25072029.
2
PSPN: Pseudo-Siamese Pyramid Network for multimodal emotion analysis.
Cogn Neurodyn. 2024 Oct;18(5):2883-2896. doi: 10.1007/s11571-024-10123-y. Epub 2024 May 28.
3
A multi-stage dynamical fusion network for multimodal emotion recognition.
Cogn Neurodyn. 2023 Jun;17(3):671-680. doi: 10.1007/s11571-022-09851-w. Epub 2022 Jul 31.
4
A systematic comparison of deep learning methods for EEG time series analysis.
Front Neuroinform. 2023 Feb 23;17:1067095. doi: 10.3389/fninf.2023.1067095. eCollection 2023.
5
Recognition of single upper limb motor imagery tasks from EEG using multi-branch fusion convolutional neural network.
Front Neurosci. 2023 Feb 22;17:1129049. doi: 10.3389/fnins.2023.1129049. eCollection 2023.
6
EEG emotion recognition based on cross-frequency granger causality feature extraction and fusion in the left and right hemispheres.
Front Neurosci. 2022 Sep 7;16:974673. doi: 10.3389/fnins.2022.974673. eCollection 2022.