Hamiltonian-Driven Adaptive Dynamic Programming With Approximation Errors.

Author Information

Yang Yongliang, Modares Hamidreza, Vamvoudakis Kyriakos G, He Wei, Xu Cheng-Zhong, Wunsch Donald C

Publication Information

IEEE Trans Cybern. 2022 Dec;52(12):13762-13773. doi: 10.1109/TCYB.2021.3108034. Epub 2022 Nov 18.

Abstract

In this article, we consider an iterative adaptive dynamic programming (ADP) algorithm within the Hamiltonian-driven framework to solve the Hamilton-Jacobi-Bellman (HJB) equation for the infinite-horizon optimal control problem in continuous time for nonlinear systems. First, a novel function, "min-Hamiltonian," is defined to capture the fundamental properties of the classical Hamiltonian. It is shown that both the HJB equation and the policy iteration (PI) algorithm can be formulated in terms of the min-Hamiltonian within the Hamiltonian-driven framework. Moreover, we develop an iterative ADP algorithm that takes into consideration the approximation errors during the policy evaluation step. We then derive a sufficient condition on the iterative value gradient to guarantee closed-loop stability of the equilibrium point as well as convergence to the optimal value. A model-free extension based on an off-policy reinforcement learning (RL) technique is also provided. Finally, numerical results illustrate the efficacy of the proposed framework.
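For context, the following is a minimal sketch of the standard continuous-time optimal control setup the abstract builds on. The notation (dynamics f, g, cost rate r, value V, Hamiltonian H, error term \varepsilon_i) is assumed here for illustration; the paper's exact definitions, in particular of the min-Hamiltonian, may differ.

% Assumed setup: dynamics \dot{x} = f(x) + g(x)u and cost
% J(x_0, u) = \int_0^\infty r(x(t), u(t)) \, dt.
\begin{align*}
  H(x, u, \nabla V) &= r(x, u) + \nabla V(x)^{\top}\bigl(f(x) + g(x)u\bigr)
    && \text{(classical Hamiltonian)} \\
  H^{*}(x, \nabla V) &= \min_{u} H(x, u, \nabla V)
    && \text{(min-Hamiltonian, in the sense the abstract describes)} \\
  0 &= H^{*}(x, \nabla V^{*})
    && \text{(HJB equation for the optimal value } V^{*}\text{)} \\
  H(x, u_i, \nabla V_i) &= \varepsilon_i
    && \text{(policy evaluation with approximation error } \varepsilon_i\text{)} \\
  u_{i+1} &= \arg\min_{u} H(x, u, \nabla V_i)
    && \text{(policy improvement)}
\end{align*}

Exact policy iteration corresponds to \varepsilon_i \equiv 0 at every step; the sufficient condition on the iterative value gradient mentioned in the abstract concerns when nonzero \varepsilon_i still preserves closed-loop stability and convergence to V^{*}.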

