Ge Gengyu, Zhang Yi, Jiang Qin, Wang Wei
School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China.
Advanced Manufacturing and Automatization Engineering Laboratory, Chongqing University of Posts and Telecommunications, Chongqing 400065, China.
Sensors (Basel). 2021 Mar 4;21(5):1772. doi: 10.3390/s21051772.
The problem of estimating a robot's position and orientation in an asymmetrical environment has been solved by various 2D laser rangefinder simultaneous localization and mapping (SLAM) approaches. Laser-based SLAM generates an occupancy grid map, on which the popular Monte Carlo Localization (MCL) method spreads particles and estimates the robot's pose with a probabilistic algorithm. In symmetrical environments, however, this becomes difficult: landmarks or features may be insufficient to determine the robot's orientation, and the estimated position may not even be unique unless the robot stands at the geometric center. This paper presents a novel, visual-feature-assisted approach to robot localization in symmetrical environments. Laser range measurements are used to estimate the robot's position, while visual features determine its orientation. First, we convert raw laser range scans into coordinate data and compute the geometric center. Second, we compute the distances from the geometric center to all scan end points and find the longest ones. We then compare those distances, fit lines, extract corner points, and measure the distances between adjacent corner points to decide whether the environment is symmetrical. Finally, if the environment is symmetrical, visual features based on the ORB keypoint detector and descriptor are added to the system to determine the robot's orientation. Experimental results show that our approach successfully determines the robot's position in a symmetrical environment, whereas ordinary MCL and its extensions consistently fail.
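The geometric steps of the abstract's pipeline can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, the scan is reduced to four toy corner points, and the symmetry test is simplified to checking that adjacent corner-to-corner distances are equal (the paper additionally compares center distances and fits lines to extract the corners).

```python
import math

def geometric_center(points):
    """Step 1: centroid of the scan end points."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    return cx, cy

def center_distances(points, center):
    """Step 2: distances from the geometric center to every end point."""
    cx, cy = center
    return [math.hypot(x - cx, y - cy) for x, y in points]

def is_symmetric(corner_points, tol=1e-6):
    """Step 3 (simplified): the environment is flagged symmetrical when
    all adjacent corner-to-corner distances are equal within `tol`."""
    dists = []
    k = len(corner_points)
    for i in range(k):
        x1, y1 = corner_points[i]
        x2, y2 = corner_points[(i + 1) % k]
        dists.append(math.hypot(x2 - x1, y2 - y1))
    return max(dists) - min(dists) < tol

# Toy scan of a 4 m x 4 m square room (perfectly symmetrical):
corners = [(0, 0), (4, 0), (4, 4), (0, 4)]
center = geometric_center(corners)          # (2.0, 2.0)
print(center_distances(corners, center))    # all equal: sqrt(8) ~ 2.83
print(is_symmetric(corners))                # True -> fall back to ORB features
```

When `is_symmetric` returns `True`, the pose from the laser scan alone is ambiguous, which is exactly the case where the paper brings in ORB visual features to resolve the orientation.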