

Road User Position and Speed Estimation via Deep Learning from Calibrated Fisheye Videos.

Affiliations

Institut Jean Lamour, Université de Lorraine, UMR7198, F-54052 Nancy, France.

Independent Researcher, F-57155 Marly, France.

Publication

Sensors (Basel). 2023 Feb 27;23(5):2637. doi: 10.3390/s23052637.

DOI: 10.3390/s23052637
PMID: 36904841
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10007371/
Abstract

In this paper, we present a deep learning processing flow aimed at Advanced Driving Assistance Systems (ADASs) for urban road users. We use a fine analysis of the optical setup of a fisheye camera and present a detailed procedure to obtain Global Navigation Satellite System (GNSS) coordinates along with the speed of the moving objects. The camera to world transform incorporates the lens distortion function. YOLOv4, re-trained with ortho-photographic fisheye images, provides road user detection. All the information extracted from the image by our system represents a small payload and can easily be broadcast to the road users. The results show that our system is able to properly classify and localize the detected objects in real time, even in low-light-illumination conditions. For an effective observation area of 20 m × 50 m, the error of the localization is in the order of one meter. Although an estimation of the velocities of the detected objects is carried out by offline processing with the FlowNet2 algorithm, the accuracy is quite good, with an error below one meter per second for urban speed range (0 to 15 m/s). Moreover, the almost ortho-photographic configuration of the imaging system ensures that the anonymity of all street users is guaranteed.

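The abstract describes a camera-to-world transform that incorporates the lens distortion function, with the camera in an almost ortho-photographic (downward-looking) configuration, plus speed estimation from frame-to-frame motion. As a minimal illustrative sketch only, not the authors' calibrated model, the geometry can be shown with an ideal equidistant fisheye projection (r = f·θ) and a flat-ground assumption; the names `pixel_to_ground`, `f_px`, and `cam_height` are hypothetical, and a real deployment would use the measured distortion function from calibration.

```python
import math

def pixel_to_ground(u, v, cx, cy, f_px, cam_height):
    """Map a fisheye pixel (u, v) to ground-plane coordinates (metres),
    assuming a downward-looking camera at height cam_height and an ideal
    equidistant projection r = f * theta (hypothetical stand-in for the
    paper's calibrated distortion function)."""
    du, dv = u - cx, v - cy
    r = math.hypot(du, dv)           # radial pixel distance from principal point
    theta = r / f_px                 # incidence angle from the optical axis (rad)
    if theta >= math.pi / 2:
        raise ValueError("ray does not intersect the ground plane")
    rho = cam_height * math.tan(theta)  # radial ground distance from nadir (m)
    if r == 0:
        return (0.0, 0.0)
    return (rho * du / r, rho * dv / r)

def speed(p0, p1, dt):
    """Estimate speed (m/s) from two ground positions dt seconds apart."""
    return math.hypot(p1[0] - p0[0], p1[1] - p0[1]) / dt
```

With such a mapping, two detections of the same object in consecutive frames give ground positions whose displacement over the frame interval yields the urban-range speeds (0 to 15 m/s) discussed in the abstract.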

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b6ff/10007371/e34a055fb827/sensors-23-02637-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b6ff/10007371/d20a2e4645df/sensors-23-02637-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b6ff/10007371/71b65730cf98/sensors-23-02637-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b6ff/10007371/1ed2cd94f053/sensors-23-02637-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b6ff/10007371/b48caaad4d71/sensors-23-02637-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b6ff/10007371/c6c49261c740/sensors-23-02637-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b6ff/10007371/714325aff923/sensors-23-02637-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b6ff/10007371/26ac543b2286/sensors-23-02637-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b6ff/10007371/5c0140f0a92b/sensors-23-02637-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b6ff/10007371/0b1a5fe0b67f/sensors-23-02637-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b6ff/10007371/148decf4befc/sensors-23-02637-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b6ff/10007371/f3ca3683020b/sensors-23-02637-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b6ff/10007371/971876ce03ec/sensors-23-02637-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b6ff/10007371/e10781173bc5/sensors-23-02637-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b6ff/10007371/441ace1f8f0a/sensors-23-02637-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b6ff/10007371/2fbe2c258d33/sensors-23-02637-g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b6ff/10007371/f868fa108d9c/sensors-23-02637-g017.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b6ff/10007371/880e22706137/sensors-23-02637-g018.jpg

Similar articles

1. Road User Position and Speed Estimation via Deep Learning from Calibrated Fisheye Videos.
Sensors (Basel). 2023 Feb 27;23(5):2637. doi: 10.3390/s23052637.
2. Infrastructure-Based Vehicle Localization through Camera Calibration for I2V Communication Warning.
Sensors (Basel). 2023 Aug 12;23(16):7136. doi: 10.3390/s23167136.
3. Skymask Matching Aided Positioning Using Sky-Pointing Fisheye Camera and 3D City Models in Urban Canyons.
Sensors (Basel). 2020 Aug 21;20(17):4728. doi: 10.3390/s20174728.
4. Tightly-Coupled GNSS/Vision Using a Sky-Pointing Camera for Vehicle Navigation in Urban Areas.
Sensors (Basel). 2018 Apr 17;18(4):1244. doi: 10.3390/s18041244.
5. Real-Time Semantic Segmentation for Fisheye Urban Driving Images Based on ERFNet.
Sensors (Basel). 2019 Jan 25;19(3):503. doi: 10.3390/s19030503.
6. Tree Trunk Recognition in Orchard Autonomous Operations under Different Light Conditions Using a Thermal Camera and Faster R-CNN.
Sensors (Basel). 2022 Mar 7;22(5):2065. doi: 10.3390/s22052065.
7. An Extended Kalman Filter and Back Propagation Neural Network Algorithm Positioning Method Based on Anti-lock Brake Sensor and Global Navigation Satellite System Information.
Sensors (Basel). 2018 Aug 21;18(9):2753. doi: 10.3390/s18092753.
8. Pole-Like Object Extraction and Pole-Aided GNSS/IMU/LiDAR-SLAM System in Urban Area.
Sensors (Basel). 2020 Dec 13;20(24):7145. doi: 10.3390/s20247145.
9. Design of a Miniaturized Wide-Angle Fisheye Lens Based on Deep Learning and Optimization Techniques.
Micromachines (Basel). 2022 Aug 27;13(9):1409. doi: 10.3390/mi13091409.
10. Object Detection, Recognition, and Tracking Algorithms for ADASs-A Study on Recent Trends.
Sensors (Basel). 2023 Dec 31;24(1):249. doi: 10.3390/s24010249.
