CAS Key Laboratory of On-Orbit Manufacturing and Integration for Space Optics System, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China.
Research Center for Materials and Optoelectronics, University of Chinese Academy of Sciences, Beijing 100049, China.
Sensors (Basel). 2023 May 11;23(10):4647. doi: 10.3390/s23104647.
Complete coverage path planning requires that a mobile robot traverse every reachable position in the environmental map. To address the problems of locally optimal paths and a high path repetition ratio in the complete coverage path planning of the traditional biologically inspired neural network algorithm, a complete coverage path planning algorithm based on Q-learning is proposed. The proposed algorithm introduces global environment information through reinforcement learning. In addition, Q-learning is applied for path planning at positions where the set of accessible path points changes, which improves the original algorithm's planning strategy near these obstacles. Simulation results show that the algorithm automatically generates an orderly path over the environmental map and achieves 100% coverage with a lower path repetition ratio.
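The core idea of the abstract, using Q-learning to learn a coverage policy over a grid map, can be sketched as a minimal tabular example. This is an illustrative toy, not the paper's algorithm: the grid size, reward values (bonus for visiting a new cell, penalties for revisits and wall hits), and hyperparameters are all assumptions, and the paper's coupling with a biologically inspired neural network is omitted.

```python
import random

# Toy complete-coverage Q-learning on a small obstacle-free grid.
# State = (robot position, frozenset of visited cells); assumed rewards:
# +1.0 for entering an unvisited cell, -0.2 for a revisit, -1.0 for a wall.
ROWS, COLS = 2, 3
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
ALPHA, GAMMA = 0.5, 0.95
EPISODES, MAX_STEPS = 5000, 60

def step(pos, visited, a):
    """Apply action a; return (new_pos, new_visited, reward)."""
    r, c = pos[0] + ACTIONS[a][0], pos[1] + ACTIONS[a][1]
    if not (0 <= r < ROWS and 0 <= c < COLS):
        return pos, visited, -1.0            # wall: stay put, penalty
    new_pos = (r, c)
    if new_pos in visited:
        return new_pos, visited, -0.2        # revisit: small penalty
    return new_pos, visited | {new_pos}, 1.0 # new cell: coverage reward

Q = {}
def q(s, a):
    return Q.get((s, a), 0.0)

random.seed(0)
for ep in range(EPISODES):
    eps = max(0.05, 1.0 - ep / (0.8 * EPISODES))  # decaying exploration
    pos, visited = (0, 0), frozenset({(0, 0)})
    for _ in range(MAX_STEPS):
        s = (pos, visited)
        if random.random() < eps:
            a = random.randrange(4)
        else:
            a = max(range(4), key=lambda x: q(s, x))
        pos, visited, r = step(pos, visited, a)
        s2 = (pos, visited)
        best = max(q(s2, x) for x in range(4))
        Q[(s, a)] = q(s, a) + ALPHA * (r + GAMMA * best - q(s, a))
        if len(visited) == ROWS * COLS:      # full coverage: episode ends
            break

# Greedy rollout with the learned table.
pos, visited = (0, 0), frozenset({(0, 0)})
path = [pos]
for _ in range(MAX_STEPS):
    s = (pos, visited)
    a = max(range(4), key=lambda x: q(s, x))
    pos, visited, _ = step(pos, visited, a)
    path.append(pos)
    if len(visited) == ROWS * COLS:
        break

coverage = len(visited) / (ROWS * COLS)
print(f"coverage: {coverage:.0%}, path length: {len(path)}")
```

Because the state includes the visited set, the table grows exponentially with map size; this is why the paper's approach restricts Q-learning to positions where the accessible path points change, rather than learning over the full coverage state space.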