

Emotion Recognition From Multimodal Physiological Signals via Discriminative Correlation Fusion With a Temporal Alignment Mechanism.

Publication Information

IEEE Trans Cybern. 2024 May;54(5):3079-3092. doi: 10.1109/TCYB.2023.3320107. Epub 2024 Apr 16.

DOI: 10.1109/TCYB.2023.3320107
PMID: 37862275
Abstract

Modeling correlations between multimodal physiological signals [e.g., canonical correlation analysis (CCA)] for emotion recognition has attracted much attention. However, existing studies rarely consider the neural nature of emotional responses within physiological signals. Furthermore, during fusion space construction, the CCA method maximizes only the correlations between different modalities and neglects the discriminative information of different emotional states. Most importantly, temporal mismatches between different neural activities are often ignored; therefore, the theoretical assumptions that multimodal data should be aligned in time and space before fusion are not fulfilled. To address these issues, we propose a discriminative correlation fusion method coupled with a temporal alignment mechanism for multimodal physiological signals. We first use neural signal analysis techniques to construct neural representations of the central nervous system (CNS) and autonomic nervous system (ANS), respectively. Then, emotion class labels are introduced in CCA to obtain more discriminative fusion representations from multimodal neural responses, and the temporal alignment between the CNS and ANS is jointly optimized with a fusion procedure that applies the Bayesian algorithm. The experimental results demonstrate that our method significantly improves the emotion recognition performance. Additionally, we show that this fusion method can model the underlying mechanisms in human nervous systems during emotional responses, and our results are consistent with prior findings. This study may guide a new approach for exploring human cognitive function based on physiological signals at different time scales and promote the development of computational intelligence and harmonious human-computer interactions.


Similar Articles

1. Emotion Recognition From Multimodal Physiological Signals via Discriminative Correlation Fusion With a Temporal Alignment Mechanism.
   IEEE Trans Cybern. 2024 May;54(5):3079-3092. doi: 10.1109/TCYB.2023.3320107. Epub 2024 Apr 16.
2. Emotion Recognition From Multimodal Physiological Signals Using a Regularized Deep Fusion of Kernel Machine.
   IEEE Trans Cybern. 2021 Sep;51(9):4386-4399. doi: 10.1109/TCYB.2020.2987575. Epub 2021 Sep 15.
3. A Hybrid Multimodal Emotion Recognition Framework for UX Evaluation Using Generalized Mixture Functions.
   Sensors (Basel). 2023 Apr 28;23(9):4373. doi: 10.3390/s23094373.
4. A novel feature fusion network for multimodal emotion recognition from EEG and eye movement signals.
   Front Neurosci. 2023 Aug 3;17:1234162. doi: 10.3389/fnins.2023.1234162. eCollection 2023.
5. A multi-stage dynamical fusion network for multimodal emotion recognition.
   Cogn Neurodyn. 2023 Jun;17(3):671-680. doi: 10.1007/s11571-022-09851-w. Epub 2022 Jul 31.
6. FusionSense: Emotion Classification Using Feature Fusion of Multimodal Data and Deep Learning in a Brain-Inspired Spiking Neural Network.
   Sensors (Basel). 2020 Sep 17;20(18):5328. doi: 10.3390/s20185328.
7. Reliable emotion recognition system based on dynamic adaptive fusion of forehead biopotentials and physiological signals.
   Comput Methods Programs Biomed. 2015 Nov;122(2):149-64. doi: 10.1016/j.cmpb.2015.07.006. Epub 2015 Jul 29.
8. E-MFNN: an emotion-multimodal fusion neural network framework for emotion recognition.
   PeerJ Comput Sci. 2024 Apr 19;10:e1977. doi: 10.7717/peerj-cs.1977. eCollection 2024.
9. Research on cross-modal emotion recognition based on multi-layer semantic fusion.
   Math Biosci Eng. 2024 Jan 17;21(2):2488-2514. doi: 10.3934/mbe.2024110.
10. Discriminative Multiple Canonical Correlation Analysis for Information Fusion.
   IEEE Trans Image Process. 2018 Apr;27(4):1951-1965. doi: 10.1109/TIP.2017.2765820. Epub 2017 Oct 23.

Cited By

1. BrainFusion: a Low-Code, Reproducible, and Deployable Software Framework for Multimodal Brain‒Computer Interface and Brain‒Body Interaction Research.
   Adv Sci (Weinh). 2025 Aug;12(32):e17408. doi: 10.1002/advs.202417408. Epub 2025 Jun 5.