
A review of reinforcement learning based hyper-heuristics.

Author Information

Li Cuixia, Wei Xiang, Wang Jing, Wang Shuozhe, Zhang Shuyan

Affiliation

School of Cyber Science and Engineering, Zhengzhou University, Zhengzhou, Henan, China.

Publication Information

PeerJ Comput Sci. 2024 Jun 28;10:e2141. doi: 10.7717/peerj-cs.2141. eCollection 2024.

Abstract

Reinforcement learning based hyper-heuristics (RL-HH) are a popular trend in the field of optimization. RL-HH combines the global search ability of hyper-heuristics (HH) with the learning ability of reinforcement learning (RL). This synergy allows the agent to dynamically adjust its own strategy, leading to gradual improvement of the solution. Existing research has shown the effectiveness of RL-HH in solving complex real-world problems. However, a comprehensive introduction to and summary of the RL-HH field is still lacking. This review surveys existing RL-HHs and presents a general framework for them. The algorithms are categorized into two classes: value-based reinforcement learning hyper-heuristics and policy-based reinforcement learning hyper-heuristics. Typical algorithms in each category are summarized and described in detail. Finally, the shortcomings of existing research on RL-HH and future research directions are discussed.
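To make the value-based category concrete, the following minimal Python sketch shows the general idea: a tabular Q-learning agent acts as the high-level strategy and learns which low-level heuristic to apply at each step, with the improvement in the objective as the reward. The specific state encoding, heuristics (`swap_random`, `swap_adjacent`, `reverse_segment`), toy sorting objective, and parameter values are illustrative assumptions, not an algorithm taken from the review.

```python
import random

random.seed(0)

def objective(sol):
    # Number of elements out of their sorted position (lower is better).
    return sum(a != b for a, b in zip(sol, sorted(sol)))

# Three hypothetical low-level heuristics, each returning a new candidate.
def swap_random(sol):
    s = sol[:]
    i, j = random.randrange(len(s)), random.randrange(len(s))
    s[i], s[j] = s[j], s[i]
    return s

def swap_adjacent(sol):
    s = sol[:]
    i = random.randrange(len(s) - 1)
    s[i], s[i + 1] = s[i + 1], s[i]
    return s

def reverse_segment(sol):
    s = sol[:]
    i, j = sorted(random.sample(range(len(s)), 2))
    s[i:j + 1] = reversed(s[i:j + 1])
    return s

HEURISTICS = [swap_random, swap_adjacent, reverse_segment]

def rl_hh(sol, steps=2000, alpha=0.1, gamma=0.9, eps=0.2):
    # State: whether the previous move improved the solution (0 or 1).
    q = [[0.0] * len(HEURISTICS) for _ in range(2)]
    state, best = 0, sol[:]
    for _ in range(steps):
        # Epsilon-greedy selection of a low-level heuristic.
        if random.random() < eps:
            a = random.randrange(len(HEURISTICS))
        else:
            a = max(range(len(HEURISTICS)), key=lambda k: q[state][k])
        cand = HEURISTICS[a](sol)
        reward = objective(sol) - objective(cand)  # positive if improved
        nxt = 1 if reward > 0 else 0
        # Q-learning update for the chosen heuristic.
        q[state][a] += alpha * (reward + gamma * max(q[nxt]) - q[state][a])
        if reward >= 0:  # accept non-worsening candidates
            sol = cand
        if objective(sol) < objective(best):
            best = sol[:]
        state = nxt
    return best

start = random.sample(range(10), 10)
result = rl_hh(start)
```

The agent never sees the heuristics' internals, only the reward signal, which is the defining trait of a hyper-heuristic: the high-level strategy searches the space of heuristics rather than the solution space directly.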


Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3ab6/11232579/6c194ca99f86/peerj-cs-10-2141-g001.jpg
