
Deep Neural Network Approach for Pose, Illumination, and Occlusion Invariant Driver Emotion Detection.

Affiliations

Department of Computer Science and Engineering, University of Bridgeport, Bridgeport, CT 06604, USA.

Department of Computer Science, William Paterson University, Wayne, NJ 07470, USA.

Publication Information

Int J Environ Res Public Health. 2022 Feb 18;19(4):2352. doi: 10.3390/ijerph19042352.

Abstract

Monitoring drivers' emotions is a key aspect of designing advanced driver assistance systems (ADAS) for intelligent vehicles. To ensure safety and reduce the likelihood of road accidents, emotional monitoring plays a key role in assessing a driver's mental state while driving. However, pose variations, illumination conditions, and occlusions are factors that hinder reliable detection of driver emotions. To overcome these challenges, two novel approaches using machine learning methods and deep neural networks are proposed to monitor drivers' expressions under varying poses, illumination conditions, and occlusions. The first approach achieved remarkable accuracies of 93.41%, 83.68%, 98.47%, and 98.18% on the CK+, FER 2013, KDEF, and KMU-FED datasets, respectively; the second approach improved these to 96.15%, 84.58%, 99.18%, and 99.09% on the same datasets, outperforming existing state-of-the-art methods.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a3d9/8871818/7f8d2fa8e8b9/ijerph-19-02352-g001.jpg
