Institute for Digital Technologies, Loughborough University, London E15 2GZ, UK.
Sensors (Basel). 2018 Aug 20;18(8):2730. doi: 10.3390/s18082730.
Autonomous robots that assist humans in day-to-day living tasks are becoming increasingly popular. Autonomous mobile robots operate by sensing and perceiving their surrounding environment to make accurate driving decisions. A combination of several different sensors, such as Light Detection and Ranging (LiDAR), radar, ultrasound sensors and cameras, is utilized to sense the surrounding environment of autonomous vehicles. These heterogeneous sensors simultaneously capture various physical attributes of the environment. Such multimodality and redundancy of sensing need to be positively utilized for reliable and consistent perception of the environment through sensor data fusion. However, these multimodal sensor data streams differ from each other in many ways, such as temporal and spatial resolution, data format, and geometric alignment. For the subsequent perception algorithms to utilize the diversity offered by multimodal sensing, the data streams need to be spatially, geometrically and temporally aligned with each other. In this paper, we address the problem of fusing the outputs of a LiDAR scanner and a wide-angle monocular image sensor for free space detection. The outputs of the LiDAR scanner and the image sensor are of different spatial resolutions and need to be aligned with each other. A geometrical model is used to spatially align the two sensor outputs, followed by a Gaussian Process (GP) regression-based resolution matching algorithm to interpolate the missing data with quantifiable uncertainty. The results indicate that the proposed sensor data fusion framework significantly aids the subsequent perception steps, as illustrated by the performance improvement of an uncertainty-aware free space detection algorithm.
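The abstract describes two steps: geometric alignment of LiDAR returns with the image, and GP regression to match the sparse LiDAR resolution to the dense image grid with quantified uncertainty. The paper itself provides no code; the following is a minimal Python sketch of those two steps under stated assumptions, using placeholder calibration values (R, t, K) and scikit-learn's generic GP regressor rather than the authors' own implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# --- Step 1: spatial alignment (hypothetical extrinsics / intrinsics) ---
# R, t map LiDAR-frame points into the camera frame; K is the pinhole
# intrinsic matrix. All numeric values are placeholders for illustration.
R = np.eye(3)
t = np.array([0.0, -0.1, -0.2])
K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])

def project_lidar_to_image(points_lidar):
    """Project 3-D LiDAR points (N x 3) onto the image plane.

    Returns pixel coordinates (M x 2) and their depths, keeping only
    points that lie in front of the camera."""
    points_cam = points_lidar @ R.T + t            # LiDAR frame -> camera frame
    points_cam = points_cam[points_cam[:, 2] > 0]  # discard points behind camera
    pixels_h = points_cam @ K.T                    # homogeneous pixel coordinates
    pixels = pixels_h[:, :2] / pixels_h[:, 2:3]    # perspective division
    return pixels, points_cam[:, 2]

# --- Step 2: GP-regression resolution matching ---
# Fit a GP to the sparse projected depths and query it on a dense pixel
# grid; the predictive standard deviation quantifies the interpolation
# uncertainty at every output cell.
def densify_depth(pixels, depths, grid_shape=(72, 128), image_size=(1280, 720)):
    kernel = 1.0 * RBF(length_scale=20.0) + WhiteKernel(noise_level=0.05)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(pixels, depths)

    h, w = grid_shape
    u, v = np.meshgrid(np.linspace(0, image_size[0] - 1, w),
                       np.linspace(0, image_size[1] - 1, h))
    query = np.column_stack([u.ravel(), v.ravel()])
    mean, std = gp.predict(query, return_std=True)
    return mean.reshape(grid_shape), std.reshape(grid_shape)

# Example with synthetic LiDAR returns (random points in front of the sensor).
rng = np.random.default_rng(0)
points = rng.uniform([-5.0, -1.0, 2.0], [5.0, 1.0, 30.0], size=(200, 3))
pix, depth = project_lidar_to_image(points)
depth_map, depth_std = densify_depth(pix, depth)
print(depth_map.shape, float(depth_std.max()))
```

A downstream free-space detector could then consume both the interpolated depth map and its per-cell uncertainty, for example by ignoring or down-weighting cells whose predictive standard deviation exceeds a threshold; the specific decision rule used in the paper is not reproduced here.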