
Emotion Recognition of Subjects With Hearing Impairment Based on Fusion of Facial Expression and EEG Topographic Map.

Author Information

Li Dahua, Liu Jiayin, Yang Yi, Hou Fazheng, Song Haotian, Song Yu, Gao Qiang, Mao Zemin

Publication Information

IEEE Trans Neural Syst Rehabil Eng. 2023;31:437-445. doi: 10.1109/TNSRE.2022.3225948. Epub 2023 Feb 1.

Abstract

Emotion analysis has been employed in many fields, such as human-computer interaction, rehabilitation, and neuroscience, but most emotion analysis methods focus on healthy controls or patients with depression. This paper aims to classify the emotional expressions of individuals with hearing impairment based on EEG signals and facial expressions. The two kinds of signals were collected simultaneously while the subjects watched affective video clips, and the clips were labeled with discrete emotional states (fear, happiness, calmness, and sadness). We extracted differential entropy (DE) features from the EEG signals and converted the DE features into EEG topographic maps (ETM). Next, the ETM and facial expressions were fused by a multichannel fusion method. Finally, a deep learning classifier, CBAM_ResNet34, which combines a Residual Network (ResNet) with the Convolutional Block Attention Module (CBAM), was used for subject-dependent emotion classification. The results show that the average classification accuracy for the four emotions reaches 78.32% after multimodal fusion, higher than the 67.90% achieved with facial expressions alone and the 69.43% achieved with EEG signals alone. Moreover, Gradient-weighted Class Activation Mapping (Grad-CAM) visualization of the ETMs showed that the prefrontal, temporal, and occipital lobes are the brain regions most closely related to emotional changes in individuals with hearing impairment.
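The DE feature is standard in EEG emotion work: for a band-filtered segment modeled as Gaussian, DE reduces to ½·log(2πeσ²). A minimal sketch of the DE extraction and topographic-map conversion described above, where the band definitions, electrode coordinates, and helper names are illustrative assumptions rather than the authors' code:

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.interpolate import griddata

# Illustrative frequency bands; the paper's exact band choices may differ.
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

def differential_entropy(segment):
    """DE of a band-filtered EEG segment under a Gaussian assumption:
    DE = 0.5 * log(2 * pi * e * sigma^2)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(segment))

def de_features(eeg, fs):
    """eeg: (n_channels, n_samples) array -> (n_channels, n_bands) DE matrix."""
    feats = np.empty((eeg.shape[0], len(BANDS)))
    for b, (lo, hi) in enumerate(BANDS.values()):
        ba = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filtered = filtfilt(*ba, eeg, axis=1)
        feats[:, b] = [differential_entropy(ch) for ch in filtered]
    return feats

def topographic_map(values, electrode_xy, size=64):
    """Interpolate one band's per-channel DE values (electrode_xy: 2-D
    scalp positions) onto a size x size grid -- one ETM channel."""
    gx, gy = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    return griddata(electrode_xy, values, (gx, gy),
                    method="cubic", fill_value=0.0)
```

Stacking one such map per band yields a multichannel image that can then be fused with the face image along the channel axis.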

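On the classifier side, here is a sketch of how a CBAM-augmented ResNet34 could consume the fused multichannel input, assuming (as one plausible layout, not the paper's confirmed design) that five band maps are stacked with a grayscale face image into a six-channel tensor and that CBAM is attached after the last residual stage:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention followed
    by spatial attention (Woo et al., 2018)."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention from average- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-wise mean and max maps.
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

def build_fusion_classifier(in_channels=6, num_classes=4):
    """ResNet34 over the fused input; channel count is an assumption."""
    net = resnet34(weights=None, num_classes=num_classes)
    net.conv1 = nn.Conv2d(in_channels, 64, 7, stride=2, padding=3, bias=False)
    # Attach CBAM after the last residual stage (placement is illustrative).
    net.layer4 = nn.Sequential(net.layer4, CBAM(512))
    return net

# Usage: stack ETM band maps and a grayscale face image channel-wise.
fused = torch.cat([torch.randn(2, 5, 224, 224),   # 5 ETM band maps
                   torch.randn(2, 1, 224, 224)],  # grayscale face image
                  dim=1)
model = build_fusion_classifier()
logits = model(fused)   # -> shape (2, 4): fear/happiness/calmness/sadness
```

The attention module re-weights feature maps channel-wise and spatially before the classification head, which is also what makes inspecting the late features with Grad-CAM informative.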
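The Grad-CAM inspection used to localize emotion-related scalp regions on the ETMs can be reproduced generically: weight each feature map of a chosen convolutional stage by the spatially averaged gradient of the target-class score. A minimal sketch, where the choice of feature_layer is an assumption:

```python
import torch

def grad_cam(model, fused_input, target_class, feature_layer):
    """Grad-CAM heat map over feature_layer for one input sample."""
    acts, grads = {}, {}
    h1 = feature_layer.register_forward_hook(
        lambda m, i, o: acts.update(a=o))
    h2 = feature_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(g=go[0]))
    logits = model(fused_input)
    logits[0, target_class].backward()
    h1.remove(); h2.remove()
    weights = grads["g"].mean(dim=(2, 3), keepdim=True)  # (1, C, 1, 1)
    cam = torch.relu((weights * acts["a"]).sum(dim=1))   # (1, H, W)
    return cam / (cam.max() + 1e-8)                      # normalized map

# Usage with the classifier sketched above (hypothetical):
# heat = grad_cam(model, fused[:1], target_class=0,
#                 feature_layer=model.layer4)
```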
