Salakapuri Rakesh, Navuri Naveen Kumar, Vobbilineni Thrimurthulu, Ravi G, Karmakonda Karthik, Vardhan K Asish
Symbiosis Institute of Technology, Hyderabad Campus, Symbiosis International (Deemed University), Pune, India.
Department of Computer Science & Engineering, Malla Reddy University, Hyderabad, India.
Sci Rep. 2025 Jul 11;15(1):25125. doi: 10.1038/s41598-025-08475-4.
Most road accidents result from distractions while driving, and road users' safety is a global concern. The proposed approach addresses this problem by integrating advanced deep learning for driver distraction detection with real-time road object recognition. Driver behaviour is categorized into physical, visual, and cognitive distraction using Convolutional Neural Networks (CNNs) with transfer learning, achieving higher accuracy while consuming fewer computational resources. A YOLO (You Only Look Once) detector recognizes vehicles, pedestrians, lane markings, and traffic signals in real time. The system comprises two components, (1) distraction detection and (2) road scene recognition, whose outputs are combined to evaluate driving scenarios. A decision-making module assesses the fused data to estimate danger levels and trigger timely warnings or corrective actions. This integrated solution enables a fully context-aware Advanced Driver Assistance System (ADAS) that alerts drivers to both distractions and hazards, increasing overall situational awareness and reducing accidents. The methodology is supported by annotated images and videos of driver behaviour and road conditions under rain, fog, and low-light scenarios. System reliability across a range of driving conditions is achieved through data augmentation, model optimization, and transfer learning. Evaluations on the State Farm Distracted Driver dataset and the KITTI and MS COCO benchmarks demonstrate improved accuracy and efficiency. Integrating driver-monitoring systems with road-aware perception yields a comprehensive, multi-target solution that makes driving safer and builds upon existing ADAS technology. The integration of CNN and YOLO deep-learning advances establishes a real-time, scalable road safety system.
The system's practicality was further validated through real-time embedded deployment on an NVIDIA Jetson Xavier NX platform, achieving 25 frames per second (FPS) with reduced latency and memory footprint, demonstrating feasibility for resource-constrained Advanced Driver Assistance Systems (ADAS). This paper thus presents a domain-specific driver-monitoring module and a knowledge-based road hazard recognition model that together bridge autonomous driving capability and the human side of driving.
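The abstract describes a decision-making module that fuses the distraction classifier's output with the detector's road hazards to estimate a danger level and choose between a warning and a corrective action. The paper does not give the fusion rule, so the following is only a minimal illustrative sketch under assumed weights and thresholds: the distraction labels follow the paper's categories, while the risk values, the proximity weighting, and the `danger_level` / `action` functions are hypothetical, not the authors' implementation.

```python
# Hypothetical sketch of the decision-making module: combine the driver's
# distraction state (from the CNN classifier) with detected road hazards
# (from YOLO) into a single 0-1 danger score. All weights and thresholds
# below are illustrative assumptions, not values from the paper.

# Assumed per-class risk weights for the paper's distraction categories.
DISTRACTION_RISK = {
    "attentive": 0.0,
    "visual": 0.5,
    "physical": 0.7,
    "cognitive": 0.6,
}

def danger_level(distraction_label: str,
                 hazard_count: int,
                 min_hazard_distance_m: float) -> float:
    """Fuse driver state and road-scene hazards into a danger score in [0, 1]."""
    driver_risk = DISTRACTION_RISK.get(distraction_label, 0.0)
    # Closer hazards weigh more; the proximity term is capped at 1.0.
    proximity = min(1.0, 10.0 / max(min_hazard_distance_m, 1.0))
    scene_risk = min(1.0, 0.2 * hazard_count) * proximity
    # A distracted driver amplifies the risk posed by the scene.
    return min(1.0, scene_risk * (0.5 + driver_risk))

def action(score: float) -> str:
    """Map the fused danger score to the system's response."""
    if score >= 0.6:
        return "corrective_action"
    if score >= 0.3:
        return "warning"
    return "none"

# Example: a cognitively distracted driver with three nearby hazards
# triggers a corrective action; an attentive driver on a clear road does not.
print(action(danger_level("cognitive", hazard_count=3, min_hazard_distance_m=5.0)))
print(action(danger_level("attentive", hazard_count=0, min_hazard_distance_m=100.0)))
```

In a real deployment this rule-based mapping would run per frame on the fused classifier and detector outputs; any such thresholds would need tuning against the datasets the paper evaluates on.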