Department of Computer Science and Engineering, University of Bridgeport, Bridgeport, CT 06604, USA.
Department of Computer Science, William Paterson University, Wayne, NJ 07470, USA.
Int J Environ Res Public Health. 2022 Feb 18;19(4):2352. doi: 10.3390/ijerph19042352.
Monitoring drivers' emotions is a key aspect of designing advanced driver assistance systems (ADAS) for intelligent vehicles. To improve safety and reduce the likelihood of road accidents, emotion monitoring plays a central role in assessing a driver's mental state while driving. However, pose variations, illumination conditions, and occlusions hinder reliable detection of driver emotions. To overcome these challenges, two novel approaches based on machine learning methods and deep neural networks are proposed to recognize drivers' expressions under varying poses, illumination, and occlusion. The first approach achieves accuracies of 93.41%, 83.68%, 98.47%, and 98.18% on the CK+, FER 2013, KDEF, and KMU-FED datasets, respectively; the second approach improves these to 96.15%, 84.58%, 99.18%, and 99.09% on the same datasets, outperforming existing state-of-the-art methods.
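The abstract does not specify the models used, so as a minimal illustrative sketch (not the paper's method), an expression-recognition pipeline of the kind described might flatten a 48×48 grayscale face crop (the FER 2013 input format) and score it against the seven basic emotion classes with a linear softmax classifier; the class names, input size, and random weights below are assumptions for demonstration only:

```python
import numpy as np

# Assumed label set: the seven basic emotions used by FER 2013.
EMOTIONS = ["anger", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def softmax(z: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def classify(face_crop: np.ndarray, W: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Flatten a 48x48 grayscale crop, normalize to [0, 1],
    and return per-emotion probabilities from a linear classifier."""
    x = face_crop.reshape(-1) / 255.0
    return softmax(x @ W + b)

# Untrained random weights stand in for a fitted model here.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(48 * 48, len(EMOTIONS)))
b = np.zeros(len(EMOTIONS))

# Score one dummy face crop; a real system would crop faces from
# the in-cabin camera frame before this step.
probs = classify(rng.integers(0, 256, size=(48, 48)), W, b)
print(EMOTIONS[int(probs.argmax())], round(float(probs.sum()), 6))
```

In a trained system, `W` and `b` would come from fitting on a labeled dataset such as FER 2013, and the deep-network variant would replace the single linear layer with convolutional feature extraction before classification.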