Knowledge Technology Group, Department of Computer Science, University of Hamburg, Hamburg, Germany.
Front Neurorobot. 2013 Oct 7;7:15. doi: 10.3389/fnbot.2013.00015. eCollection 2013.
As a fundamental research topic, autonomous indoor robot navigation continues to be a challenge in unconstrained real-world indoor environments. Although many models for map-building and planning exist, they are difficult to integrate because of the high levels of noise, dynamics, and complexity in such environments. Addressing this challenge, this paper describes a neural model for environment mapping and robot navigation based on learning spatial knowledge. Considering that a person typically moves within a room without colliding with objects, the model learns spatial knowledge by observing the person's movement with a ceiling-mounted camera. Based on the acquired map, a robot can plan and navigate to any given position in the room, and adapt the map when possible obstacles are identified. In addition, salient visual features are learned and stored in the map during navigation. This anchoring of visual features in the map enables the robot to find and navigate to a target object when shown an image of it. We implement this model on a humanoid robot and conduct tests in a home-like environment. The results of our experiments show that the learned sensorimotor map masters complex navigation tasks.
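The core idea of learning traversable space from an observed person's movement can be illustrated with a minimal sketch. Here, grid cells that the tracked person visits are marked as collision-free, and a breadth-first search then plans a path over those cells. The grid resolution, the coordinates, and the BFS planner are illustrative assumptions, not the paper's neural model.

```python
from collections import deque

def build_map(person_trajectory):
    """Cells the observed person visited are assumed collision-free.

    In the paper's setting, the trajectory would come from tracking the
    person with a ceiling-mounted camera; here it is given directly.
    """
    return set(person_trajectory)

def plan(free, start, goal):
    """Breadth-first search over traversable cells; returns a path or None."""
    if start not in free or goal not in free:
        return None
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        x, y = path[-1]
        if (x, y) == goal:
            return path
        # Expand to 4-connected neighboring cells that are known to be free.
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (nx, ny) in free and (nx, ny) not in seen:
                seen.add((nx, ny))
                queue.append(path + [(nx, ny)])
    return None

# Hypothetical observed trajectory of a person crossing the room.
trajectory = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (3, 2), (4, 2)]
free = build_map(trajectory)
path = plan(free, (0, 0), (4, 2))
```

Because the robot only plans over cells the person has demonstrated to be safe, it inherits the person's implicit obstacle avoidance without an explicit obstacle model; the paper's sensorimotor map additionally adapts when the robot itself detects obstacles.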