

Bio-inspired visual self-localization in real world scenarios using Slow Feature Analysis.

Affiliations

Faculty of Computer Science and Engineering, Frankfurt University of Applied Sciences, Frankfurt am Main, Hessen, Germany.

Honda Research Institute Europe, Offenbach am Main, Hessen, Germany.

Publication

PLoS One. 2018 Sep 21;13(9):e0203994. doi: 10.1371/journal.pone.0203994. eCollection 2018.

Abstract

We present a biologically motivated model for visual self-localization which extracts a spatial representation of the environment directly from high-dimensional image data by employing a single unsupervised learning rule. The resulting representation encodes the position of the camera as slowly varying features while being invariant to its orientation, resembling place cells in a rodent's hippocampus. Using an omnidirectional mirror allows us to manipulate the image statistics by adding simulated rotational movement for improved orientation invariance. We apply the model in indoor and outdoor experiments and, for the first time, compare its performance against two state-of-the-art visual SLAM methods. Results of the experiments show that the proposed straightforward model enables precise self-localization with accuracies in the range of 13–33 cm, demonstrating its competitiveness with the established SLAM methods in the tested scenarios.
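The core of the described model is Slow Feature Analysis: from a high-dimensional input time series, extract the linear (or, in the paper's setting, nonlinearly expanded) output directions whose values change most slowly over time. As a rough illustration only, the linear case can be solved in closed form by whitening the input and then taking the directions with the smallest temporal-difference variance; the function name and the toy two-channel signal below are illustrative assumptions, not the authors' implementation, which operates on omnidirectional image data.

```python
import numpy as np

def linear_sfa(x, n_features=2):
    """Minimal linear Slow Feature Analysis sketch.

    x: array of shape (T, D), a time series of D-dimensional inputs.
    Returns a (D_kept, n_features) projection matrix mapping the
    centered input to its most slowly varying linear features.
    """
    x = x - x.mean(axis=0)                      # center the data
    # Whiten: decorrelate inputs and scale to unit variance.
    cov = np.cov(x, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    keep = eigval > 1e-10                       # drop degenerate directions
    w_white = eigvec[:, keep] / np.sqrt(eigval[keep])
    z = x @ w_white
    # Slowness objective: minimize variance of temporal differences.
    dz = np.diff(z, axis=0)
    dcov = np.cov(dz, rowvar=False)
    dval, dvec = np.linalg.eigh(dcov)           # ascending: smallest = slowest
    return w_white @ dvec[:, :n_features]
```

Applied to a mixture of a slow and a fast sinusoid, the first extracted feature recovers the slow component; in the paper's setting, the slow components of the visual stream correspond to the camera's position rather than its (faster-varying) orientation.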

