Faculty of Computer Science and Engineering, Frankfurt University of Applied Sciences, Frankfurt am Main, Hessen, Germany.
Honda Research Institute Europe, Offenbach am Main, Hessen, Germany.
PLoS One. 2018 Sep 21;13(9):e0203994. doi: 10.1371/journal.pone.0203994. eCollection 2018.
We present a biologically motivated model for visual self-localization that extracts a spatial representation of the environment directly from high-dimensional image data using a single unsupervised learning rule. The resulting representation encodes the position of the camera as slowly varying features while remaining invariant to its orientation, resembling place cells in a rodent's hippocampus. An omnidirectional mirror allows the image statistics to be manipulated by adding simulated rotational movement, improving orientation invariance. We apply the model in indoor and outdoor experiments and, for the first time, compare its performance against two state-of-the-art visual SLAM methods. The results show that the proposed straightforward model enables precise self-localization with accuracies in the range of 13-33 cm, demonstrating its competitiveness with the established SLAM methods in the tested scenarios.
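The abstract's "single unsupervised learning rule" that extracts slowly varying, orientation-invariant features is the slowness principle. As an illustrative sketch only (the paper's exact pipeline is not given here), linear Slow Feature Analysis can be implemented by whitening the input and then finding the directions whose temporal derivative has minimal variance:

```python
import numpy as np

def linear_sfa(X, n_components=2):
    """Minimal linear Slow Feature Analysis sketch.

    X: array of shape (T, D), a time series of D-dimensional inputs.
    Returns the n_components slowest output signals, shape (T, n_components).
    """
    # 1. Center and whiten the input so all directions have unit variance.
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    keep = eigval > 1e-10                      # drop degenerate directions
    W = eigvec[:, keep] / np.sqrt(eigval[keep])
    Z = Xc @ W
    # 2. Slowness objective: minimize the variance of the temporal
    #    difference signal. In the whitened space this reduces to an
    #    eigen-decomposition of the covariance of dZ/dt.
    dZ = np.diff(Z, axis=0)
    dcov = np.cov(dZ, rowvar=False)
    dval, dvec = np.linalg.eigh(dcov)
    # Smallest eigenvalues correspond to the slowest features.
    return Z @ dvec[:, :n_components]
```

In the place-cell analogy, slow features extracted from a camera moving through an environment vary with position (which changes slowly) while averaging out orientation (which, with the simulated rotational movement described above, changes quickly).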