Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou 311100, China.
Science and Technology on Space Intelligent Control Laboratory, Beijing Institute of Control Engineering, Beijing 100094, China.
Sensors (Basel). 2022 Mar 23;22(7):2453. doi: 10.3390/s22072453.
The perception module plays an important role in vehicles equipped with advanced driver-assistance systems (ADAS). This paper presents a multi-sensor data fusion system based on a polarization color stereo camera and a forward-looking light detection and ranging (LiDAR) sensor, which achieves multi-target detection, recognition, and data fusion. The You Only Look Once v4 (YOLOv4) network is used for object detection and recognition on the color images. Depth images are computed from the rectified left and right images based on the epipolar constraint, and obstacles are then detected in the depth images with the MeanShift algorithm. Pixel-level polarization images are extracted from the raw polarization-grey images, enabling reliable detection of water hazards. The PointPillars network detects objects in the LiDAR point cloud. Calibration and time synchronization between the sensors are also accomplished. The experimental results show that data fusion enriches the detection results, provides high-dimensional perceptual information, and extends the effective detection range; moreover, detection remains stable across diverse range and illumination conditions.
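The abstract does not give implementation details for the MeanShift obstacle-detection step; a minimal sketch, assuming each valid depth pixel is represented by a (row, col, depth) feature vector and using scikit-learn's `MeanShift` (the function name, bandwidth, and range cutoff here are illustrative assumptions, not the paper's parameters), might look like:

```python
import numpy as np
from sklearn.cluster import MeanShift

def cluster_obstacles(depth, max_range=30.0, bandwidth=2.0):
    """Hypothetical sketch: group valid depth-image pixels into
    obstacle candidates via MeanShift clustering.

    depth: 2-D array of per-pixel range values (0 = invalid).
    Returns the (row, col, depth) features and a cluster label per pixel.
    """
    # Keep only pixels with a valid, in-range depth measurement.
    rows, cols = np.nonzero((depth > 0) & (depth < max_range))
    feats = np.column_stack([rows, cols, depth[rows, cols]])
    # MeanShift finds modes of the feature density; nearby pixels
    # with similar depth end up in the same obstacle cluster.
    labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(feats)
    return feats, labels
```

In practice the feature scaling between image coordinates and metric depth, and the bandwidth choice, strongly affect how clusters map to physical obstacles.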
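Pixel-level polarization sensors typically capture intensities behind four polarizer orientations (0°, 45°, 90°, 135°). A common way to turn these into polarization images is via the linear Stokes parameters; the following is a generic sketch of that computation, not the paper's exact formulation:

```python
import numpy as np

def polarization_from_quad(i0, i45, i90, i135):
    """Compute degree and angle of linear polarization (DoLP, AoLP)
    from four polarizer-angle intensity images (standard Stokes
    formulation; assumed here, not taken from the paper)."""
    i0, i45 = np.asarray(i0, float), np.asarray(i45, float)
    i90, i135 = np.asarray(i90, float), np.asarray(i135, float)
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90                        # 0°/90° contrast
    s2 = i45 - i135                      # 45°/135° contrast
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-6)
    aolp = 0.5 * np.arctan2(s2, s1)
    return dolp, aolp
```

Water surfaces tend to produce strongly polarized specular reflections, so thresholding the DoLP image is a natural basis for the water-hazard detection the abstract describes.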