

Multimodal Emotion Classification Method and Analysis of Brain Functional Connectivity Networks.

Publication Information

IEEE Trans Neural Syst Rehabil Eng. 2022;30:2022-2031. doi: 10.1109/TNSRE.2022.3192533. Epub 2022 Jul 26.

Abstract

Since multimodal emotion classification across different human states has rarely been studied, this paper explores the emotional mechanisms of brain functional connectivity networks after emotional stimulation. We devise a multimodal emotion classification method fusing a brain functional connectivity network based on electroencephalography (EEG) and eye gaze (ECFCEG) to study these mechanisms. First, the nonlinear phase lag index (PLI) and phase-locking value (PLV) are calculated to construct multiband brain functional connectivity networks, which are then converted into binary brain networks, and seven features of the binary brain networks are extracted. At the same time, features of the eye gaze signals are extracted. Next, a fusion algorithm called feature-level, randomization-based kernel canonical correlation analysis (FRKCCA) is executed for feature-level fusion (FLF) of the brain functional connectivity network and eye gaze features. Finally, support vector machines (SVMs) are used to classify positive and negative emotions in multiple frequency bands with both single-modal and multimodal features. The experimental results demonstrate that the complementary representation properties of the two modalities effectively improve the accuracy of emotion classification, achieving a classification accuracy of 91.32±1.81%. The classification accuracy of pupil diameter in the valence dimension is higher than that of the other features. In addition, the average emotion classification performance in the valence dimension is better than that in arousal. Our findings demonstrate that the functional connectivity networks of the right hemisphere exhibit a deficiency: in particular, the information processing ability of the right temporal (RT) and right posterior (RP) regions is weak in the low-frequency bands after emotional stimulation. By contrast, phase synchronization of the brain functional connectivity networks based on PLI is stronger than that based on PLV.
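To make the processing chain concrete, the sketch below walks through the abstract's pipeline on synthetic data: band-filter the EEG, compute PLI/PLV connectivity, binarize the networks, extract graph features, fuse them with eye-gaze features, and classify with an SVM. It is an illustration under stated assumptions, not the authors' implementation: the paper's seven network features, its binarization rule, and the FRKCCA algorithm are not specified in the abstract, so standard graph metrics, a proportional threshold, and scikit-learn's plain CCA stand in for them.

```python
# Minimal sketch of the abstract's pipeline on synthetic data.
# Assumptions (not from the paper): signal shapes, the alpha band, a
# proportional binarization threshold, five standard graph metrics in
# place of the paper's seven features, and plain CCA in place of FRKCCA.
import numpy as np
import networkx as nx
from scipy.signal import butter, filtfilt, hilbert
from sklearn.cross_decomposition import CCA
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC


def band_filter(eeg, lo, hi, fs):
    """Band-pass filter each channel (eeg: channels x samples)."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=1)


def pli_plv(eeg):
    """PLI and PLV connectivity matrices over all channel pairs."""
    phase = np.angle(hilbert(eeg, axis=1))  # instantaneous phase per channel
    n = eeg.shape[0]
    pli, plv = np.zeros((n, n)), np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dphi = phase[i] - phase[j]
            pli[i, j] = pli[j, i] = abs(np.mean(np.sign(np.sin(dphi))))
            plv[i, j] = plv[j, i] = abs(np.mean(np.exp(1j * dphi)))
    return pli, plv


def binarize(conn, density=0.2):
    """Keep the strongest `density` fraction of edges (assumed rule)."""
    thresh = np.quantile(conn[np.triu_indices_from(conn, k=1)], 1 - density)
    return (conn >= thresh).astype(int)


def graph_features(adj):
    """Stand-in graph metrics; the paper's seven features are not listed."""
    g = nx.from_numpy_array(adj)
    return np.array([
        nx.average_clustering(g),
        nx.density(g),
        np.mean([d for _, d in g.degree()]),
        nx.global_efficiency(g),
        nx.local_efficiency(g),
    ])


# --- toy end-to-end run ---
fs, n_trials, n_chan = 250, 60, 16
rng = np.random.default_rng(0)
X_brain = []
for _ in range(n_trials):
    eeg = rng.standard_normal((n_chan, 4 * fs))        # 4 s of fake EEG
    pli, _ = pli_plv(band_filter(eeg, 8.0, 13.0, fs))  # alpha band
    X_brain.append(graph_features(binarize(pli)))
X_brain = np.asarray(X_brain)
X_gaze = rng.standard_normal((n_trials, 8))   # e.g. pupil-diameter statistics
y = rng.integers(0, 2, n_trials)              # negative/positive valence labels

# Feature-level fusion (FLF): project both modalities into correlated
# subspaces with CCA, concatenate, and classify with an RBF SVM.
Zb, Zg = CCA(n_components=4).fit_transform(X_brain, X_gaze)
X_fused = np.hstack([Zb, Zg])
print(cross_val_score(SVC(kernel="rbf"), X_fused, y, cv=5).mean())
```

In the paper itself, FRKCCA adds kernelization and randomization on top of this CCA-style fusion step, which is the part this sketch approximates most loosely.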

