Su Yuanyuan, Li Wenchao, Bi Ning, Lv Zhao
Department of Design, Anhui University, Hefei, China.
College of Design, Iowa State University, Ames, IA, United States.
Front Neurorobot. 2019 Jun 26;13:46. doi: 10.3389/fnbot.2019.00046. eCollection 2019.
Giving a robot the ability to perceive emotion in its environment can improve human-robot interaction (HRI), thereby facilitating more human-like communication. To achieve emotion recognition for adolescents in different built environments, we propose a multi-modal emotion intensity perception method that integrates electroencephalography (EEG) and eye movement information. Specifically, we first develop a new stimulus video selection method based on the computation of normalized arousal and valence scores from participants' subjective feedback. Then, we establish a valence perception sub-model and an arousal perception sub-model by collecting and analyzing emotional EEG and eye movement signals, respectively. We employ this dual recognition method to perceive emotional intensities synchronously in two dimensions. In the laboratory environment, the best recognition accuracies of the modality fusion are 72.8% for the arousal dimension and 69.3% for the valence dimension. The experimental results validate the feasibility of the proposed multi-modal emotion recognition method for environment emotion intensity perception. This promising tool not only achieves more accurate emotion perception for HRI systems but also provides an alternative approach to quantitatively assessing environmental psychology.
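To illustrate the stimulus-selection step the abstract describes, the sketch below min-max normalizes subjective valence and arousal ratings per video and ranks videos by how far they sit from the neutral point of the valence-arousal plane. All function names, the distance-based ranking rule, and the example ratings are illustrative assumptions, not the authors' actual procedure.

```python
# Hypothetical sketch of stimulus video selection via normalized
# valence/arousal scores; names and thresholds are assumptions.

def normalize(scores):
    """Min-max normalize a list of ratings to the range [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.5 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def select_stimuli(ratings, top_k=2):
    """ratings: {video_id: (mean_valence, mean_arousal)} from participant feedback.
    Returns the top_k videos ranked by Euclidean distance from the
    neutral point (0.5, 0.5) in the normalized valence-arousal plane,
    i.e. the most emotionally evocative candidates."""
    ids = list(ratings)
    val = normalize([ratings[v][0] for v in ids])
    aro = normalize([ratings[v][1] for v in ids])
    dist = {v: ((val[i] - 0.5) ** 2 + (aro[i] - 0.5) ** 2) ** 0.5
            for i, v in enumerate(ids)}
    return sorted(ids, key=lambda v: dist[v], reverse=True)[:top_k]

# Toy example: three built-environment videos with mean 9-point ratings.
ratings = {"park": (8.1, 6.5), "alley": (2.3, 7.9), "classroom": (5.0, 4.8)}
print(select_stimuli(ratings))  # → ['alley', 'park']
```

In this toy data, the near-neutral "classroom" clip is discarded, matching the intuition that stimulus sets should span the extremes of both dimensions.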