
Efficient detection of driver fatigue state based on all-weather illumination scenarios.

Author information

Hu Siyang, Gao Qihuang, Xie Kai, Wen Chang, Zhang Wei, He Jianbiao

Affiliations

School of Electronic Information and Electrical Engineering, Yangtze University, Jingzhou, 434023, China.

School of Computer Science, Yangtze University, Jingzhou, 434023, China.

Publication information

Sci Rep. 2024 Jul 24;14(1):17075. doi: 10.1038/s41598-024-67131-5.

Abstract

Driving fatigue is a leading cause of traffic accidents each year, so research on driver fatigue detection and early-warning systems is of great practical significance. However, current fatigue detection methods still face two problems: a single source of information cannot accurately reflect the driver's actual state across different fatigue phases, and detection performance degrades, or fails entirely, under abnormal illumination. In this paper, multi-task cascaded convolutional networks (MTCNN) and infrared-based remote photoplethysmography (rPPG) are used to extract the driver's facial and physiological information, modality-specific fatigue cues are mined in depth, and a multi-modal feature fusion model is constructed to comprehensively analyze the driver's fatigue trend. To address low detection accuracy under abnormal illumination, the multi-modal features extracted from visible-light and infrared images are fused by a multi-loss reconstruction (MLR) module, and a driving fatigue detection module based on a Bi-LSTM model is built to exploit the temporal dynamics of fatigue. Experiments were carried out under all-weather illumination scenarios on the NTHU-DDD, UTA-RLDDD, and FAHD datasets. The results show that the multi-modal fatigue detection model outperforms the single-modal model, improving accuracy by 8.1%. Under abnormal illumination such as strong and weak light, the method achieves accuracies between 83.6% and 91.7%, and under normal illumination it reaches 93.2%.
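To make the pipeline described in the abstract more concrete, the sketch below shows how per-frame visible-light and infrared feature vectors could be fused and passed to a Bi-LSTM for fatigue classification. It is a minimal illustration, not the authors' implementation: the feature dimensions, the simple concatenation-plus-projection stand-in for the MLR fusion module, and the class and variable names are assumptions.

```python
# Minimal sketch (not the paper's released code): a two-stream model where
# per-frame visible-light and infrared features are fused and a Bi-LSTM
# classifies the fatigue state over a time window of frames.
import torch
import torch.nn as nn


class FusionBiLSTM(nn.Module):
    def __init__(self, vis_dim=128, ir_dim=128, fused_dim=128, hidden=64, num_classes=2):
        super().__init__()
        # Stand-in for the paper's multi-loss reconstruction (MLR) fusion:
        # here, a simple learned projection of the concatenated modalities.
        self.fuse = nn.Sequential(
            nn.Linear(vis_dim + ir_dim, fused_dim),
            nn.ReLU(),
        )
        # Bi-LSTM captures the temporal evolution of fatigue cues across frames.
        self.bilstm = nn.LSTM(fused_dim, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, vis_seq, ir_seq):
        # vis_seq, ir_seq: (batch, time, feature) sequences of per-frame features,
        # e.g. MTCNN-based facial descriptors and rPPG-derived physiological signals.
        fused = self.fuse(torch.cat([vis_seq, ir_seq], dim=-1))
        out, _ = self.bilstm(fused)           # (batch, time, 2*hidden)
        return self.classifier(out[:, -1])    # classify from the last time step


if __name__ == "__main__":
    model = FusionBiLSTM()
    vis = torch.randn(4, 30, 128)  # 4 clips, 30 frames, 128-dim visible-light features
    ir = torch.randn(4, 30, 128)   # matching infrared features
    print(model(vis, ir).shape)    # torch.Size([4, 2]) -> alert vs. fatigued logits
```

In the paper, the fusion is trained with reconstruction losses so that each modality can compensate for the other under abnormal illumination; the plain projection above only marks where that module would sit in the pipeline.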

