
A Driver Gaze Estimation Method Based on Deep Learning.

Affiliations

Information Engineering School, Chang'an University, Xi'an 710061, China.

Department of Computer Science and IT, The University of Agriculture Peshawar, Peshawar 25000, Pakistan.

Publication

Sensors (Basel). 2022 May 23;22(10):3959. doi: 10.3390/s22103959.

DOI: 10.3390/s22103959
PMID: 35632365
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9142909/
Abstract

Car crashes are among the top ten leading causes of death; they could mainly be attributed to distracted drivers. An advanced driver-assistance technique (ADAT) is a procedure that can notify the driver about a dangerous scenario, reduce traffic crashes, and improve road safety. The main contribution of this work involved utilizing the driver's attention to build an efficient ADAT. To obtain this "attention value", the gaze tracking method is proposed. The gaze direction of the driver is critical toward understanding/discerning fatal distractions, pertaining to when it is obligatory to notify the driver about the risks on the road. A real-time gaze tracking system is proposed in this paper for the development of an ADAT that obtains and communicates the gaze information of the driver. The developed ADAT system detects various head poses of the driver and estimates eye gaze directions, which play important roles in assisting the driver and avoiding any unwanted circumstances. The first (and more significant) task in this research work involved the development of a benchmark image dataset consisting of head poses and horizontal and vertical direction gazes of the driver's eyes. To detect the driver's face accurately and efficiently, the You Only Look Once (YOLO-V4) face detector was used by modifying it with the Inception-v3 CNN model for robust feature learning and improved face detection. Finally, transfer learning in the InceptionResNet-v2 CNN model was performed, where the CNN was used as a classification model for head pose detection and eye gaze angle estimation; a regression layer to the InceptionResNet-v2 CNN was added instead of SoftMax and the classification output layer. The proposed model detects and estimates head pose directions and eye directions with higher accuracy. The average accuracy achieved by the head pose detection system was 91%; the model achieved a RMSE of 2.68 for vertical and 3.61 for horizontal eye gaze estimations.
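The abstract reports gaze-estimation error as RMSE (2.68 for vertical and 3.61 for horizontal gaze angles). As a minimal sketch of how such a metric is computed over predicted versus ground-truth gaze angles, with made-up illustrative values rather than the paper's data:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between true and predicted gaze angles (degrees)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Illustrative (made-up) vertical gaze angles in degrees:
true_v = [10.0, -5.0, 0.0, 20.0]
pred_v = [12.0, -4.0, -3.0, 18.0]
print(rmse(true_v, pred_v))  # ≈ 2.12 degrees
```

A lower RMSE means the regression head's predicted angles sit closer to the labeled gaze directions on average, which is why the paper reports it separately for the vertical and horizontal axes.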


Figures 1-18 (sensors-22-03959-g001 through g018) are available with the full text at https://pmc.ncbi.nlm.nih.gov/articles/PMC9142909/.

Similar Articles

1. A Driver Gaze Estimation Method Based on Deep Learning.
Sensors (Basel). 2022 May 23;22(10):3959. doi: 10.3390/s22103959.
2. Faster R-CNN and Geometric Transformation-Based Detection of Driver's Eyes Using Multiple Near-Infrared Camera Sensors.
Sensors (Basel). 2019 Jan 7;19(1):197. doi: 10.3390/s19010197.
3. Continuous Driver's Gaze Zone Estimation Using RGB-D Camera.
Sensors (Basel). 2019 Mar 14;19(6):1287. doi: 10.3390/s19061287.
4. Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor.
Sensors (Basel). 2018 Feb 3;18(2):456. doi: 10.3390/s18020456.
5. Driver's Head Pose and Gaze Zone Estimation Based on Multi-Zone Templates Registration and Multi-Frame Point Cloud Fusion.
Sensors (Basel). 2022 Apr 20;22(9):3154. doi: 10.3390/s22093154.
6. Gaze and Eye Tracking: Techniques and Applications in ADAS.
Sensors (Basel). 2019 Dec 14;19(24):5540. doi: 10.3390/s19245540.
7. Combining head pose and eye location information for gaze estimation.
IEEE Trans Image Process. 2012 Feb;21(2):802-15. doi: 10.1109/TIP.2011.2162740. Epub 2011 Jul 22.
8. Vision-Based Driver's Cognitive Load Classification Considering Eye Movement Using Machine Learning and Deep Learning.
Sensors (Basel). 2021 Nov 30;21(23):8019. doi: 10.3390/s21238019.
9. Dual-Cameras-Based Driver's Eye Gaze Tracking System with Non-Linear Gaze Point Refinement.
Sensors (Basel). 2022 Mar 17;22(6):2326. doi: 10.3390/s22062326.
10. A Study on the Gaze Range Calculation Method During an Actual Car Driving Using Eyeball Angle and Head Angle Information.
Sensors (Basel). 2019 Nov 2;19(21):4774. doi: 10.3390/s19214774.

Cited By

1. Leveraging deep learning for plant disease and pest detection: a comprehensive review and future directions.
Front Plant Sci. 2025 Feb 21;16:1538163. doi: 10.3389/fpls.2025.1538163. eCollection 2025.
2. Implementation of a High-Accuracy Neural Network-Based Pupil Detection System for Real-Time and Real-World Applications.
Sensors (Basel). 2024 Apr 16;24(8):2548. doi: 10.3390/s24082548.
3. Comprehensive Assessment of Artificial Intelligence Tools for Driver Monitoring and Analyzing Safety Critical Events in Vehicles.
Sensors (Basel). 2024 Apr 12;24(8):2478. doi: 10.3390/s24082478.

References

1. Gaze and Eye Tracking: Techniques and Applications in ADAS.
Sensors (Basel). 2019 Dec 14;19(24):5540. doi: 10.3390/s19245540.
2. Monitoring driver fatigue using a single-channel electroencephalographic device: A validation study by gaze-based, driving performance, and subjective data.
Accid Anal Prev. 2017 Dec;109:62-69. doi: 10.1016/j.aap.2017.09.025. Epub 2017 Oct 13.
3. Safety-critical event risk associated with cell phone tasks as measured in naturalistic driving studies: A systematic review and meta-analysis.
Accid Anal Prev. 2016 Feb;87:161-9. doi: 10.1016/j.aap.2015.11.015. Epub 2015 Dec 24.
4. Model-Based 3D Gaze Estimation Using a TOF Camera.
Sensors (Basel). 2024 Feb 6;24(4):1070. doi: 10.3390/s24041070.
5. Exploring transfer learning in chest radiographic images within the interplay between COVID-19 and diabetes.
Front Public Health. 2023 Oct 18;11:1297909. doi: 10.3389/fpubh.2023.1297909. eCollection 2023.
6. Advancements in Neighboring-Based Energy-Efficient Routing Protocol (NBEER) for Underwater Wireless Sensor Networks.
Sensors (Basel). 2023 Jun 29;23(13):6025. doi: 10.3390/s23136025.
7. Improving EEG-Based Driver Distraction Classification Using Brain Connectivity Estimators.
Sensors (Basel). 2022 Aug 19;22(16):6230. doi: 10.3390/s22166230.
8. Gaze Estimation Approach Using Deep Differential Residual Network.
Sensors (Basel). 2022 Jul 21;22(14):5462. doi: 10.3390/s22145462.