IEEE Trans Cybern. 2019 Mar;49(3):1110-1122. doi: 10.1109/TCYB.2018.2797176. Epub 2018 Feb 8.
In this paper, we present EmotionMeter, a multimodal emotion recognition framework that combines brain waves and eye movements. To increase the feasibility and wearability of EmotionMeter in real-world applications, we design a six-electrode placement above the ears to collect electroencephalography (EEG) signals. We combine EEG and eye movements to integrate the internal cognitive states and external subconscious behaviors of users and thereby improve the recognition accuracy of EmotionMeter. The experimental results demonstrate that modality fusion with multimodal deep neural networks significantly enhances performance compared with a single modality, achieving a best mean accuracy of 85.11% for four emotions (happy, sad, fear, and neutral). We explore the complementary representational capacities of EEG and eye movements and find that EEG has the advantage in classifying the happy emotion, whereas eye movements outperform EEG in recognizing the fear emotion. To investigate the stability of EmotionMeter over time, each subject performs the experiment three times on different days. EmotionMeter obtains a mean recognition accuracy of 72.39% across sessions with the six-electrode EEG and eye movement features. These results demonstrate the effectiveness of EmotionMeter both within and between sessions.
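For concreteness, below is a minimal sketch of the kind of modality-fusion network the abstract describes: one encoder per modality, fusion by concatenation, and a four-way classifier. The feature dimensions (EEG_DIM, EYE_DIM), layer sizes, and the concatenation-based fusion scheme are illustrative assumptions, not the authors' exact architecture.

```python
# A hedged sketch of EEG + eye-movement fusion for 4-class emotion
# recognition. Assumptions: hand-crafted EEG features (e.g., differential
# entropy from the six electrodes across several frequency bands) and an
# eye-movement feature vector (e.g., pupil diameter, fixation, saccade
# statistics); all dimensions and layer widths are placeholders.
import torch
import torch.nn as nn

EEG_DIM = 30   # assumed: 6 electrodes x 5 frequency bands
EYE_DIM = 33   # assumed: length of the eye-movement feature vector
N_CLASSES = 4  # happy, sad, fear, neutral


class FusionNet(nn.Module):
    def __init__(self, eeg_dim=EEG_DIM, eye_dim=EYE_DIM, hidden=64):
        super().__init__()
        # One encoder per modality maps raw features to a shared hidden size.
        self.eeg_enc = nn.Sequential(nn.Linear(eeg_dim, hidden), nn.ReLU())
        self.eye_enc = nn.Sequential(nn.Linear(eye_dim, hidden), nn.ReLU())
        # Fusion by concatenating the two hidden representations.
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, N_CLASSES),
        )

    def forward(self, eeg, eye):
        fused = torch.cat([self.eeg_enc(eeg), self.eye_enc(eye)], dim=1)
        return self.classifier(fused)  # logits over the four emotions


# Toy usage with random tensors standing in for real features.
model = FusionNet()
eeg = torch.randn(8, EEG_DIM)
eye = torch.randn(8, EYE_DIM)
logits = model(eeg, eye)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, N_CLASSES, (8,)))
loss.backward()
print(logits.shape)  # torch.Size([8, 4])
```

Concatenation is the simplest fusion baseline; the complementary strengths reported in the abstract (EEG for happy, eye movements for fear) are what a shared classifier over both representations can exploit.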