Multimodal Emotion Recognition from Eye Image, Eye Movement and EEG Using Deep Neural Networks.

Authors

Guo Jiang-Jian, Zhou Rong, Zhao Li-Ming, Lu Bao-Liang

Publication

Annu Int Conf IEEE Eng Med Biol Soc. 2019 Jul;2019:3071-3074. doi: 10.1109/EMBC.2019.8856563.

Abstract

Given the complexity of recording electroencephalography (EEG), some researchers are looking for new features for emotion recognition. To investigate the potential of eye-tracking glasses for multimodal emotion recognition, we collect eye images and use them, together with eye movements and EEG, to classify five emotions. We compare four combinations of the three data types and two fusion methods: feature-level fusion and the Bimodal Deep AutoEncoder (BDAE). With the three-modality fusion features generated by the BDAE, the best mean accuracy of 79.63% is achieved. By analyzing the confusion matrices, we find that the three modalities provide complementary information for recognizing the five emotions. Meanwhile, the experimental results indicate that classifiers using fused eye-image and eye-movement features achieve a comparable classification accuracy of 71.99%.
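The abstract names two fusion strategies but does not show them; a small sketch may help. Feature-level fusion simply concatenates the per-modality feature vectors before classification, while the BDAE encodes each modality separately, fuses the hidden codes into a shared representation, and is trained to reconstruct all inputs from it. The PyTorch sketch below illustrates that general idea extended to the paper's three modalities; the class name MultimodalDAE, the layer sizes, and the feature dimensionalities are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultimodalDAE(nn.Module):
    """BDAE-style autoencoder extended to three modalities.

    Assumed inputs: EEG features, eye-movement statistics, and an
    eye-image embedding. All dimensions are placeholders.
    """

    def __init__(self, dims=(310, 33, 128), hidden=64, shared=100):
        super().__init__()
        # One encoder per modality maps its features to a hidden code.
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, hidden), nn.Sigmoid()) for d in dims]
        )
        # Concatenated codes are compressed into a shared representation,
        # which serves as the fused multimodal feature vector.
        self.fuse = nn.Sequential(nn.Linear(hidden * len(dims), shared), nn.Sigmoid())
        # The decoder path mirrors the encoders to reconstruct each modality.
        self.defuse = nn.Sequential(nn.Linear(shared, hidden * len(dims)), nn.Sigmoid())
        self.decoders = nn.ModuleList([nn.Linear(hidden, d) for d in dims])

    def forward(self, xs):
        codes = [enc(x) for enc, x in zip(self.encoders, xs)]
        fused = self.fuse(torch.cat(codes, dim=1))
        parts = self.defuse(fused).chunk(len(self.decoders), dim=1)
        recons = [dec(h) for dec, h in zip(self.decoders, parts)]
        return fused, recons

# Reconstruction training on random stand-in batches (batch size 8).
eeg, mov, img = torch.randn(8, 310), torch.randn(8, 33), torch.randn(8, 128)
model = MultimodalDAE()
fused, recons = model([eeg, mov, img])
loss = sum(F.mse_loss(r, x) for r, x in zip(recons, [eeg, mov, img]))
loss.backward()

# Feature-level fusion, by contrast, is just concatenation:
flat = torch.cat([eeg, mov, img], dim=1)  # shape (8, 471)
```

After reconstruction training, the fused vector (rather than the raw concatenation) would be passed to a separate classifier over the five emotion classes; the abstract does not specify the classifier, so that stage is omitted here.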

