
3D Vehicle Trajectory Extraction Using DCNN in an Overlapping Multi-Camera Crossroad Scene.

Authors

Heo Jinyeong, Kwon Yongjin James

Affiliation

Department of Industrial Engineering, Ajou University, Suwon 16499, Korea.

Publication

Sensors (Basel). 2021 Nov 26;21(23):7879. doi: 10.3390/s21237879.

Abstract

The 3D vehicle trajectory in complex traffic conditions, such as crossroads and heavy traffic, is practically very useful in autonomous driving. In order to accurately extract the 3D vehicle trajectory from a perspective camera at a crossroad, where vehicles span an angular range of 360 degrees, several problems must be solved: the narrow visual angle of a single-camera scene, vehicle occlusion under low camera angles, and the lack of vehicle physical information. In this paper, we propose a method for estimating the 3D bounding boxes of vehicles and extracting their trajectories using a deep convolutional neural network (DCNN) in an overlapping multi-camera crossroad scene. First, traffic data were collected using overlapping multi-cameras to obtain a wide range of trajectories around the crossroad. Then, 3D bounding boxes of vehicles were estimated and tracked in each single-camera scene through DCNN models (YOLOv4, multi-branch CNN) combined with camera calibration. Using this information, the 3D vehicle trajectory could be extracted on the ground plane of the crossroad by combining the results from the overlapping multi-camera views through a homography matrix. Finally, in experiments, the errors of the extracted trajectories were corrected through simple linear interpolation and regression, and the accuracy of the proposed method was verified by comparison with ground-truth data. Compared with other previously reported methods, our approach is shown to be more accurate and more practical.
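The two post-processing steps named in the abstract, projecting detections onto the crossroad ground plane via a homography matrix and correcting gaps with linear interpolation, can be sketched as follows. This is a minimal illustration under assumed inputs (a known 3×3 homography `H` and a per-frame trajectory with dropped detections), not the authors' implementation.

```python
import numpy as np

def project_to_ground(H, uv):
    """Map an image point (u, v) to ground-plane coordinates via homography H."""
    p = H @ np.array([uv[0], uv[1], 1.0])
    return p[:2] / p[2]  # normalize homogeneous coordinates

def fill_gaps(frames, xs):
    """Linearly interpolate missing trajectory samples over frame indices."""
    all_frames = np.arange(frames[0], frames[-1] + 1)
    return all_frames, np.interp(all_frames, frames, xs)

# Sanity check: the identity homography maps a pixel to itself.
print(project_to_ground(np.eye(3), (320.0, 240.0)))  # [320. 240.]

# Recover a detection dropped at frame 2 by linear interpolation.
f, x = fill_gaps([0, 1, 3], [0.0, 1.0, 3.0])
print(x)  # [0. 1. 2. 3.]
```

In practice each camera would contribute its own homography (estimated from calibrated ground-plane correspondences), and the per-camera trajectories would be fused on the shared ground plane before interpolation.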

