Deep Reinforcement Learning-Based Automatic Exploration for Navigation in Unknown Environment.

Publication Information

IEEE Trans Neural Netw Learn Syst. 2020 Jun;31(6):2064-2076. doi: 10.1109/TNNLS.2019.2927869. Epub 2019 Aug 6.

Abstract

This paper investigates the automatic exploration problem in unknown environments, which is a key issue in applying robotic systems to social tasks. Solutions built by stacking hand-crafted decision rules cannot cover the variety of environments and sensor properties. Learning-based control methods are adaptive to these scenarios, but they suffer from low learning efficiency and poor transferability from simulation to reality. In this paper, we construct a general exploration framework by decomposing the exploration process into decision, planning, and mapping modules, which increases the modularity of the robotic system. Based on this framework, we propose a deep reinforcement learning-based decision algorithm that uses a deep neural network to learn an exploration strategy from the partial map. The results show that the proposed algorithm has better learning efficiency and adaptability to unknown environments. In addition, we conduct experiments on a physical robot, and the results suggest that the learned policy transfers well from simulation to the real robot.
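The abstract describes a modular pipeline: a mapping module maintains a partial map, a learned decision module chooses the next exploration goal from that map, and a planning module produces the motion toward it. The sketch below only illustrates that decomposition and is not the authors' implementation; it assumes PyTorch, a grid-based partial occupancy map, and hypothetical `env`, `mapper`, and `planner` interfaces, with a small CNN scoring candidate goals standing in for the paper's deep reinforcement learning-based decision network.

```python
# Illustrative sketch of the decision / planning / mapping decomposition.
# DecisionNet, explore, env, mapper, and planner are hypothetical names,
# not the API from the paper.
import torch
import torch.nn as nn


class DecisionNet(nn.Module):
    """Scores candidate exploration goals from a partial occupancy grid."""

    def __init__(self, grid_size=64, n_actions=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat_dim = 32 * (grid_size // 4) ** 2
        self.head = nn.Linear(feat_dim, n_actions)  # action scores / Q-values

    def forward(self, partial_map):
        return self.head(self.features(partial_map))


def explore(env, policy, mapper, planner, max_steps=500):
    """Generic exploration loop: map -> decide -> plan -> execute."""
    partial_map = mapper.reset(env)                       # mapping module
    for _ in range(max_steps):
        obs = torch.from_numpy(partial_map).float()[None, None]
        with torch.no_grad():
            action = policy(obs).argmax(dim=1).item()     # decision module
        waypoint = planner.goal_for(action, partial_map)  # planning module
        env.move_to(waypoint)
        partial_map = mapper.update(env.lidar_scan())
        if mapper.coverage() > 0.95:                      # stop when mostly explored
            break
    return partial_map
```

In such a decomposition only the decision network needs to be trained, while mapping and planning can remain conventional components, which is what gives the framework the modularity the abstract emphasizes.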
