Graduate School of Automotive Engineering, Kookmin University, 77, Jeongneung-ro, Seongbuk-gu, Seoul 02707, Korea.
Department of Automobile and IT Convergence, Kookmin University, 77, Jeongneung-ro, Seongbuk-gu, Seoul 02707, Korea.
Sensors (Basel). 2021 Mar 19;21(6):2166. doi: 10.3390/s21062166.
In intelligent vehicles, it is essential to monitor the driver's condition; recognizing the driver's emotional state is one of the most challenging and important tasks. Most previous studies focused on facial expression recognition to monitor the driver's emotional state. However, while driving, many factors prevent drivers from revealing their emotions on their faces. To address this problem, we propose the deep learning-based driver's real emotion recognizer (DRER), an algorithm that recognizes drivers' real emotions, which cannot be completely identified from their facial expressions alone. The proposed algorithm comprises two models: (i) a facial expression recognition model, which adopts a state-of-the-art convolutional neural network structure; and (ii) a sensor fusion emotion recognition model, which fuses the recognized facial expression state with electrodermal activity, a bio-physiological signal representing the electrical characteristics of the skin, to recognize the driver's real emotional state. We categorized the driver's emotions and conducted human-in-the-loop experiments to acquire the data. Experimental results show that the proposed fusion approach achieves a 114% increase in accuracy compared to using only facial expressions and a 146% increase in accuracy compared to using only electrodermal activity. In conclusion, our proposed method achieves 86.8% accuracy in recognizing the driver's induced emotion while driving.
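The abstract does not specify the fusion model's internals. As an illustration only, the sketch below shows one common way such a sensor fusion stage can be structured: concatenating the facial-expression class probabilities with EDA-derived features and passing the fused vector to a small classifier over emotion categories. All names, dimensions, and the one-layer classifier are assumptions for illustration, not the paper's actual DRER architecture.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fuse_features(face_probs, eda_features):
    """Late fusion: concatenate facial-expression probabilities
    with EDA-derived features along the feature axis."""
    return np.concatenate([face_probs, eda_features], axis=-1)

class FusionClassifier:
    """Illustrative one-layer classifier over the fused feature vector.
    (The paper's model is a deep network; this stands in for its head.)"""
    def __init__(self, in_dim, n_emotions, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(in_dim, n_emotions))
        self.b = np.zeros(n_emotions)

    def predict_proba(self, fused):
        return softmax(fused @ self.W + self.b)

    def predict(self, fused):
        return self.predict_proba(fused).argmax(axis=-1)

# Hypothetical dimensions: 7 facial-expression classes, 3 EDA features,
# 4 driver-emotion categories, batch of 2 time windows.
rng = np.random.default_rng(1)
face_probs = softmax(rng.normal(size=(2, 7)))   # output of the CNN expression model
eda_feats = rng.normal(size=(2, 3))             # e.g. tonic level, phasic peaks, slope
fused = fuse_features(face_probs, eda_feats)    # shape (2, 10)
clf = FusionClassifier(in_dim=10, n_emotions=4)
pred = clf.predict(fused)                       # one emotion index per window
```

The design choice sketched here (late fusion of modality-level outputs rather than raw signals) matches the abstract's description of fusing the *recognized* facial expression state with the EDA signal.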