Institut Jean Lamour, Université de Lorraine, UMR7198, F-54052 Nancy, France.
Independent Researcher, F-57155 Marly, France.
Sensors (Basel). 2023 Feb 27;23(5):2637. doi: 10.3390/s23052637.
In this paper, we present a deep learning processing flow aimed at Advanced Driving Assistance Systems (ADASs) for urban road users. We provide a fine analysis of the optical setup of a fisheye camera and a detailed procedure for obtaining Global Navigation Satellite System (GNSS) coordinates, along with the speed, of moving objects. The camera-to-world transform incorporates the lens distortion function. YOLOv4, re-trained on ortho-photographic fisheye images, provides road-user detection. All the information our system extracts from the image represents a small payload and can easily be broadcast to the road users. The results show that our system properly classifies and localizes the detected objects in real time, even in low-illumination conditions. For an effective observation area of 20 m × 50 m, the localization error is on the order of one meter. Although the velocities of the detected objects are estimated by offline processing with the FlowNet2 algorithm, the accuracy is good, with an error below one meter per second over the urban speed range (0 to 15 m/s). Moreover, the almost ortho-photographic configuration of the imaging system guarantees the anonymity of all street users.
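To make the camera-to-world step concrete, the sketch below assumes an equidistant fisheye projection (r = f·θ) and a camera looking straight down at the road from a known height. The focal length, principal point, camera height, and helper names (pixel_to_ground, speed_mps) are illustrative assumptions, not the paper's calibration or code, and the resulting camera-centred metres would still need to be shifted into GNSS coordinates using the camera's surveyed position.

```python
# Hypothetical sketch: fisheye pixel -> ground-plane metres, and
# optical-flow displacement -> speed. All constants are placeholder
# assumptions, not values from the paper.

import numpy as np

FOCAL_PX = 320.0        # equidistant-model focal length in pixels (assumption)
CX, CY = 640.0, 640.0   # principal point (assumption)
CAMERA_HEIGHT_M = 6.0   # camera height above the road plane (assumption)

def pixel_to_ground(u, v):
    """Project an image pixel onto the road plane beneath a downward-looking
    fisheye camera, assuming an equidistant lens model r = f * theta."""
    dx, dy = u - CX, v - CY
    r = np.hypot(dx, dy)          # radial distance from the principal point
    if r == 0.0:
        return 0.0, 0.0           # pixel on the optical axis maps to nadir
    theta = r / FOCAL_PX          # invert the distortion function r = f * theta
    # A ray at angle theta from the (vertical) optical axis hits the ground
    # at horizontal range h * tan(theta); the pixel's azimuth is preserved.
    ground_range = CAMERA_HEIGHT_M * np.tan(theta)
    return ground_range * dx / r, ground_range * dy / r

def speed_mps(flow_px, u, v, fps):
    """Approximate ground speed from a per-frame optical-flow vector
    (e.g. from FlowNet2) by projecting both flow endpoints to the ground."""
    x0, y0 = pixel_to_ground(u, v)
    x1, y1 = pixel_to_ground(u + flow_px[0], v + flow_px[1])
    return np.hypot(x1 - x0, y1 - y0) * fps
```

The same ground projection turns a per-frame flow displacement into a speed estimate by projecting both endpoints and scaling by the frame rate, which is how an error budget in metres per second, as reported above, can be related to flow accuracy in pixels.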