Artificial Intelligence Center, Faculty of Electrical Engineering, Czech Technical University in Prague, 166 27 Prague 6, Czech Republic.
Department of Computer Science, University of Nottingham, Jubilee Campus, 7301 Wollaton Rd, Lenton, Nottingham NG8 1BB, UK.
Sensors (Basel). 2022 Apr 13;22(8):2975. doi: 10.3390/s22082975.
Visual teach and repeat navigation (VT&R) is popular in robotics thanks to its simplicity and versatility. It enables mobile robots equipped with a camera to traverse learned paths without the need to create globally consistent metric maps. Although teach and repeat frameworks have been reported to be relatively robust to changing environments, they still struggle with day-to-night and seasonal changes. This paper aims to find the horizontal displacement between prerecorded and currently perceived images required to steer a robot towards the previously traversed path. We employ a fully convolutional neural network to obtain dense representations of the images that are robust to changes in the environment and variations in illumination. The proposed model achieves state-of-the-art performance on multiple datasets with seasonal and day/night variations. In addition, our experiments show that it is possible to use the model to generate additional training examples that can be used to further improve the original model's robustness. We also conducted a real-world experiment on a mobile robot to demonstrate the suitability of our method for VT&R.
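The core step described above, estimating the horizontal displacement between a prerecorded and a currently perceived image from dense representations, can be sketched as a 1-D alignment search. The paper uses a fully convolutional neural network to produce those representations; the sketch below substitutes placeholder NumPy arrays for the network's feature maps, and the function name, shapes, and sign convention are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def horizontal_displacement(map_teach, map_repeat, max_shift):
    """Estimate the horizontal shift (in feature-map columns) that best
    aligns map_repeat with map_teach.

    Both inputs are dense (H, W, C) representations, e.g. outputs of a
    fully convolutional network. For each candidate shift we compare the
    overlapping columns with a mean dot-product similarity and return the
    shift that maximizes it. This is a brute-force stand-in for the
    learned matching in the paper.
    """
    H, W, C = map_teach.shape
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a = map_teach[:, s:, :]          # overlap region in the teach map
            b = map_repeat[:, :W - s, :]     # corresponding region in the repeat map
        else:
            a = map_teach[:, :W + s, :]
            b = map_repeat[:, -s:, :]
        # Mean per-pixel dot product over the overlap; higher = better alignment.
        score = np.mean(np.sum(a * b, axis=-1))
        if score > best_score:
            best_score, best_shift = score, s
    return best_shift
```

In a VT&R loop, the returned displacement (scaled from feature-map columns back to image pixels) would serve as the steering correction toward the taught path.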