Department of Electrical Engineering, The University of Texas at San Antonio, 1 UTSA Circle, San Antonio, TX 78249, USA.
Sensors (Basel). 2020 Apr 12;20(8):2180. doi: 10.3390/s20082180.
This paper focuses on data fusion, which is fundamental to perception, one of the most important modules in any autonomous system. Over the past decade, there has been a surge in the use of smart/autonomous mobility systems. Such systems can be applied in many areas of life, such as safe mobility for people with disabilities and for senior citizens, and they depend on accurate sensor information to function optimally. This information may come from a single sensor or from a suite of sensors with the same or different modalities. We review various types of sensors and their data, along with the need to fuse these data with one another to produce the best output for the task at hand, which in this case is autonomous navigation. Obtaining such accurate data requires suitable technology to read the sensor data, process them, eliminate or at least reduce the noise, and then apply them to the required tasks. We present a survey of current data-processing techniques that implement data fusion using different sensors: LiDAR, which uses laser scanning technology, and stereo/depth, monocular Red-Green-Blue (RGB), and Time-of-Flight (ToF) cameras, which use optical technology. We also review the efficiency of using fused data from multiple sensors, rather than a single sensor, in autonomous navigation tasks such as mapping, obstacle detection and avoidance, and localization. This survey provides sensor information to researchers who intend to accomplish the task of motion control of a robot and details the use of LiDAR and cameras to accomplish robot navigation.
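As a minimal sketch of why fused data from multiple sensors can outperform a single sensor, consider inverse-variance weighting of two noisy range readings of the same obstacle. This is only one simple fusion rule among those the survey covers (Kalman filtering generalizes it to dynamic state estimation); the sensor names, noise levels, and the fuse function below are illustrative assumptions, not taken from the paper.

import numpy as np

def fuse(z_a, var_a, z_b, var_b):
    """Fuse two independent measurements of the same quantity.

    The inverse-variance-weighted estimate minimizes the variance of
    the combined result; the fused variance is never larger than the
    smaller of the two input variances.
    """
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    z_fused = (w_a * z_a + w_b * z_b) / (w_a + w_b)
    var_fused = 1.0 / (w_a + w_b)
    return z_fused, var_fused

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_range = 5.0                         # meters to an obstacle (hypothetical)
    lidar_sigma, camera_sigma = 0.03, 0.10   # assumed noise standard deviations

    # Simulate one noisy reading per sensor.
    z_lidar = true_range + rng.normal(0.0, lidar_sigma)
    z_camera = true_range + rng.normal(0.0, camera_sigma)

    z, var = fuse(z_lidar, lidar_sigma**2, z_camera, camera_sigma**2)
    print(f"LiDAR: {z_lidar:.3f} m, camera: {z_camera:.3f} m")
    print(f"fused: {z:.3f} m (std {np.sqrt(var):.3f} m)")

The fused standard deviation is below that of the better sensor alone, which is the basic statistical argument for multi-sensor fusion in tasks like obstacle detection and localization.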