
Hamiltonian-Driven Adaptive Dynamic Programming With Approximation Errors.

Author Information

Yang Yongliang, Modares Hamidreza, Vamvoudakis Kyriakos G, He Wei, Xu Cheng-Zhong, Wunsch Donald C

Publication Information

IEEE Trans Cybern. 2022 Dec;52(12):13762-13773. doi: 10.1109/TCYB.2021.3108034. Epub 2022 Nov 18.

Abstract

In this article, we consider an iterative adaptive dynamic programming (ADP) algorithm within the Hamiltonian-driven framework to solve the Hamilton-Jacobi-Bellman (HJB) equation for the infinite-horizon optimal control problem in continuous time for nonlinear systems. First, a novel function, "min-Hamiltonian," is defined to capture the fundamental properties of the classical Hamiltonian. It is shown that both the HJB equation and the policy iteration (PI) algorithm can be formulated in terms of the min-Hamiltonian within the Hamiltonian-driven framework. Moreover, we develop an iterative ADP algorithm that takes into consideration the approximation errors during the policy evaluation step. We then derive a sufficient condition on the iterative value gradient to guarantee closed-loop stability of the equilibrium point as well as convergence to the optimal value. A model-free extension based on an off-policy reinforcement learning (RL) technique is also provided. Finally, numerical results illustrate the efficacy of the proposed framework.
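
For orientation, the objects named in the abstract can be written in the standard continuous-time optimal control notation. The following is a minimal sketch using conventional symbols ($f$, $g$, $r$, $V$) from the ADP literature; these are assumptions for illustration, not quotations of the paper's own definitions:

```latex
% Conventional continuous-time optimal control objects referenced in the
% abstract; notation is assumed, not quoted from the paper.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
For dynamics $\dot{x} = f(x) + g(x)\,u$ with running cost $r(x,u)$,
the Hamiltonian associated with a value function $V$ is
\begin{equation}
  H(x, u, \nabla V) = r(x, u) + \nabla V(x)^{\top}\bigl(f(x) + g(x)\,u\bigr).
\end{equation}
Minimizing over the control yields the min-Hamiltonian,
\begin{equation}
  \mathcal{H}(x, \nabla V) = \min_{u} H(x, u, \nabla V),
\end{equation}
and the HJB equation for the optimal value $V^{*}$ reads
$\mathcal{H}(x, \nabla V^{*}) = 0$. Policy iteration alternates
policy evaluation, $H(x, u_{k}, \nabla V^{k}) = 0$, with policy
improvement, $u_{k+1} = \arg\min_{u} H(x, u, \nabla V^{k})$.
\end{document}
```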

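As a concrete special case that can be checked numerically: for linear dynamics and quadratic cost, continuous-time policy iteration reduces to Kleinman's algorithm, where policy evaluation is a Lyapunov equation and policy improvement is a gain update. The sketch below performs exact policy evaluation (no approximation error, unlike the paper's iterative ADP variant); the matrices A, B, Q, R and the initial stabilizing gain K are illustrative assumptions.

```python
# Policy iteration for the continuous-time LQR special case
# (Kleinman's algorithm). Policy evaluation solves a Lyapunov
# equation; policy improvement updates the feedback gain.
# All matrices below are illustrative assumptions, not from the paper.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

A = np.array([[0.0, 1.0], [-1.0, 2.0]])   # unstable open-loop dynamics
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                             # state cost weight
R = np.eye(1)                             # control cost weight

K = np.array([[0.0, 4.0]])                # an initial stabilizing gain

for k in range(20):
    Ak = A - B @ K
    # Policy evaluation: solve A_k^T P + P A_k = -(Q + K^T R K)
    P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
    # Policy improvement: K_{k+1} = R^{-1} B^T P
    K_next = np.linalg.solve(R, B.T @ P)
    if np.linalg.norm(K_next - K) < 1e-10:  # stop once the update stalls
        break
    K = K_next

# Sanity check against the algebraic Riccati solution.
P_are = solve_continuous_are(A, B, Q, R)
print("||P - P_ARE|| =", np.linalg.norm(P - P_are))
```

Each Lyapunov solve plays the role of the policy-evaluation step; the paper's contribution concerns stability and convergence when that step is carried out only approximately.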
