Suppr 超能文献



A search and rescue robot search method based on flower pollination algorithm and Q-learning fusion algorithm.

Affiliations

College of Computer and Control Engineering, Qiqihar University, Qiqihar, China.

College of Telecommunication and Electronic Engineering, Qiqihar University, Qiqihar, China.

Publication

PLoS One. 2023 Mar 30;18(3):e0283751. doi: 10.1371/journal.pone.0283751. eCollection 2023.

DOI:10.1371/journal.pone.0283751
PMID:36996142
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10062604/
Abstract

The search algorithm plays an important role in a robot's motion planning: it determines whether the mobile robot can complete its task. To solve the search task in complex environments, a fusion algorithm based on the Flower Pollination algorithm and Q-learning (FIQL) is proposed. To improve accuracy, an improved grid map is used in the environment-modeling stage, changing the original static grid into a combination of static and dynamic grids. Second, Q-learning is combined with the Flower Pollination algorithm to initialize the Q-table and accelerate the search and rescue robot's path search. A combination of static and dynamic reward functions is proposed for the different situations the search and rescue robot encounters during the search, so that the robot receives appropriate feedback in each specific situation. The experiments are divided into two parts: path planning on a typical grid map and on the improved grid map. They show that the improved grid map increases the success rate and that FIQL enables the search and rescue robot to accomplish its task in a complex environment. Compared with other algorithms, FIQL reduces the number of iterations, improves the robot's adaptability to complex environments, and offers short convergence time and low computational cost.
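The abstract says the Flower Pollination algorithm (FPA) is used to initialize the Q-table before learning starts, but does not give the paper's exact encoding. The following is therefore a minimal, generic FPA sketch (all names illustrative) that minimizes an arbitrary objective; in the paper's setting the objective would score candidate Q-table seeds or paths rather than the toy function used here.

```python
import math
import random

def levy_step(beta=1.5):
    # One step of a Levy flight (Mantegna's algorithm).
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def flower_pollination(objective, dim, bounds, n_flowers=20, p=0.8, iters=200):
    """Minimize `objective` over [lo, hi]^dim with the standard FPA."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_flowers)]
    fits = [objective(x) for x in pop]
    best_i = min(range(n_flowers), key=fits.__getitem__)
    g, g_fit = pop[best_i][:], fits[best_i]
    for _ in range(iters):
        for i in range(n_flowers):
            if random.random() < p:
                # Global pollination: Levy flight toward the current best flower.
                step = levy_step()
                cand = [x + step * (gb - x) for x, gb in zip(pop[i], g)]
            else:
                # Local pollination: move along the line between two random flowers.
                j, k = random.sample(range(n_flowers), 2)
                eps = random.random()
                cand = [x + eps * (xj - xk) for x, xj, xk in zip(pop[i], pop[j], pop[k])]
            cand = [max(lo, min(hi, x)) for x in cand]  # keep within bounds
            f = objective(cand)
            if f < fits[i]:            # greedy replacement
                pop[i], fits[i] = cand, f
                if f < g_fit:
                    g, g_fit = cand[:], f
    return g, g_fit
```

The switch probability `p` trades off global (Lévy-flight) against local pollination; `p = 0.8` is the value commonly used in the FPA literature.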

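The combined static and dynamic reward described in the abstract can be pictured as follows. This is a hypothetical sketch, not the paper's formulation: the static part assigns fixed payoffs for goal and obstacle cells, while the dynamic part is taken here to be a Manhattan-distance shaping term, on an invented 4×4 grid.

```python
import random

# Toy 4x4 grid: 0 = free, 1 = static obstacle, 2 = "dynamic" obstacle
# (the paper distinguishes static and dynamic grids; both block movement here).
N = 4
GRID = [[0, 0, 0, 0],
        [0, 1, 0, 2],
        [0, 0, 0, 0],
        [1, 0, 0, 0]]
START, GOAL = (0, 0), (3, 3)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def blocked(cell):
    r, c = cell
    return not (0 <= r < N and 0 <= c < N) or GRID[r][c] != 0

def reward(state, nxt):
    # Static part: fixed payoffs for reaching the goal or hitting an obstacle.
    if nxt == GOAL:
        return 100.0
    if blocked(nxt):
        return -50.0
    # Dynamic part (illustrative): distance-based shaping toward the goal,
    # plus a small per-step cost.
    d_old = abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1])
    d_new = abs(nxt[0] - GOAL[0]) + abs(nxt[1] - GOAL[1])
    return (d_old - d_new) - 0.1

def q_learning(episodes=800, alpha=0.5, gamma=0.9, eps=0.2):
    Q = {(r, c): [0.0] * 4 for r in range(N) for c in range(N)}
    for _ in range(episodes):
        s = START
        for _ in range(50):
            if random.random() < eps:                 # epsilon-greedy exploration
                a = random.randrange(4)
            else:
                a = max(range(4), key=lambda i: Q[s][i])
            nxt = (s[0] + ACTIONS[a][0], s[1] + ACTIONS[a][1])
            r = reward(s, nxt)
            s2 = s if blocked(nxt) else nxt           # stay put on collision
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
            if s == GOAL:
                break
    return Q

def greedy_path(Q, max_steps=20):
    # Follow the learned policy greedily from START.
    s, path = START, [START]
    for _ in range(max_steps):
        a = max(range(4), key=lambda i: Q[s][i])
        nxt = (s[0] + ACTIONS[a][0], s[1] + ACTIONS[a][1])
        if blocked(nxt):
            break
        s = nxt
        path.append(s)
        if s == GOAL:
            break
    return path
```

In the paper, the FPA step would seed `Q` before `q_learning` runs, which is what shortens convergence relative to plain Q-learning with a zero-initialized table.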

Figures (pone.0283751.g001–g011):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b2f/10062604/035cfff5887c/pone.0283751.g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b2f/10062604/372940b62ea4/pone.0283751.g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b2f/10062604/a465fd8a1673/pone.0283751.g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b2f/10062604/5996ed1d398f/pone.0283751.g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b2f/10062604/2003b8b7961b/pone.0283751.g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b2f/10062604/f8fc4e12dd51/pone.0283751.g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b2f/10062604/bc5ca19f48e0/pone.0283751.g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b2f/10062604/90ef2c78630e/pone.0283751.g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b2f/10062604/e1f409e10e1f/pone.0283751.g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b2f/10062604/bf3144c5c17a/pone.0283751.g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b2f/10062604/4748c3549629/pone.0283751.g011.jpg

Similar articles

1. A search and rescue robot search method based on flower pollination algorithm and Q-learning fusion algorithm.
PLoS One. 2023 Mar 30;18(3):e0283751. doi: 10.1371/journal.pone.0283751. eCollection 2023.
2. CLSQL: Improved Q-Learning Algorithm Based on Continuous Local Search Policy for Mobile Robot Path Planning.
Sensors (Basel). 2022 Aug 8;22(15):5910. doi: 10.3390/s22155910.
3. A Flower Pollination Optimization Algorithm Based on Cosine Cross-Generation Differential Evolution.
Sensors (Basel). 2023 Jan 5;23(2):606. doi: 10.3390/s23020606.
4. An efficient dynamic system for real-time robot-path planning.
IEEE Trans Syst Man Cybern B Cybern. 2006 Aug;36(4):755-66. doi: 10.1109/tsmcb.2005.862724.
5. Grid-Based Mobile Robot Path Planning Using Aging-Based Ant Colony Optimization Algorithm in Static and Dynamic Environments.
Sensors (Basel). 2020 Mar 28;20(7):1880. doi: 10.3390/s20071880.
6. A path planning approach for mobile robots using short and safe Q-learning.
PLoS One. 2022 Sep 26;17(9):e0275100. doi: 10.1371/journal.pone.0275100. eCollection 2022.
7. Mobile robot path planning with reformative bat algorithm.
PLoS One. 2022 Nov 4;17(11):e0276577. doi: 10.1371/journal.pone.0276577. eCollection 2022.
8. A novel Q-learning algorithm based on improved whale optimization algorithm for path planning.
PLoS One. 2022 Dec 27;17(12):e0279438. doi: 10.1371/journal.pone.0279438. eCollection 2022.
9. Path Planning of Mobile Robot With Improved Ant Colony Algorithm and MDP to Produce Smooth Trajectory in Grid-Based Environment.
Front Neurorobot. 2020 Jul 9;14:44. doi: 10.3389/fnbot.2020.00044. eCollection 2020.
10. A Path-Planning Approach Based on Potential and Dynamic Q-Learning for Mobile Robots in Unknown Environment.
Comput Intell Neurosci. 2022 Jun 2;2022:2540546. doi: 10.1155/2022/2540546. eCollection 2022.

Cited by

1. An improved YOLOv8s-based UAV target detection algorithm.
PLoS One. 2025 Aug 21;20(8):e0327732. doi: 10.1371/journal.pone.0327732. eCollection 2025.
2. A criterion for assessing obstacle-induced environmental complexity in multi-robot coverage exploration.
PLoS One. 2025 May 16;20(5):e0323112. doi: 10.1371/journal.pone.0323112. eCollection 2025.
