


Hamiltonian-Driven Adaptive Dynamic Programming With Approximation Errors.

Authors

Yang Yongliang, Modares Hamidreza, Vamvoudakis Kyriakos G, He Wei, Xu Cheng-Zhong, Wunsch Donald C

Publication

IEEE Trans Cybern. 2022 Dec;52(12):13762-13773. doi: 10.1109/TCYB.2021.3108034. Epub 2022 Nov 18.

DOI: 10.1109/TCYB.2021.3108034
PMID: 34495864
Abstract

In this article, we consider an iterative adaptive dynamic programming (ADP) algorithm within the Hamiltonian-driven framework to solve the Hamilton-Jacobi-Bellman (HJB) equation for the infinite-horizon optimal control problem in continuous time for nonlinear systems. First, a novel function, "min-Hamiltonian," is defined to capture the fundamental properties of the classical Hamiltonian. It is shown that both the HJB equation and the policy iteration (PI) algorithm can be formulated in terms of the min-Hamiltonian within the Hamiltonian-driven framework. Moreover, we develop an iterative ADP algorithm that takes into consideration the approximation errors during the policy evaluation step. We then derive a sufficient condition on the iterative value gradient to guarantee closed-loop stability of the equilibrium point as well as convergence to the optimal value. A model-free extension based on an off-policy reinforcement learning (RL) technique is also provided. Finally, numerical results illustrate the efficacy of the proposed framework.
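The policy iteration (PI) scheme the abstract builds on alternates policy evaluation and policy improvement until the HJB solution is reached. A minimal sketch of this idea, using the linear-quadratic special case (Kleinman's algorithm) where exact policy evaluation reduces to a Lyapunov equation and PI converges to the algebraic Riccati solution; the system matrices `A`, `B`, `Q`, `R` below are illustrative assumptions, not taken from the paper, and the paper's actual contribution is tolerating approximation errors in the evaluation step:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Illustrative system (hypothetical): xdot = A x + B u,
# cost = integral of x'Qx + u'Ru dt over [0, inf).
A = np.array([[0.0, 1.0], [-1.0, -1.0]])  # open-loop stable, so K0 = 0 is admissible
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

K = np.zeros((1, 2))  # initial stabilizing policy u = -K x
for i in range(30):
    Acl = A - B @ K
    # Policy evaluation: solve Acl' P + P Acl + Q + K'RK = 0 (Lyapunov equation)
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    # Policy improvement: minimize the Hamiltonian over u, giving K = R^{-1} B' P
    K_new = np.linalg.solve(R, B.T @ P)
    if np.linalg.norm(K_new - K) < 1e-10:
        break
    K = K_new

# PI converges to the optimal value from the algebraic Riccati equation
P_star = solve_continuous_are(A, B, Q, R)
print(np.allclose(P, P_star, atol=1e-6))
```

In the nonlinear setting of the paper, the Lyapunov equation is replaced by an approximate policy evaluation (e.g., with neural networks), which is exactly where the approximation errors analyzed by the authors enter.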


Similar Articles

1. Hamiltonian-Driven Adaptive Dynamic Programming With Approximation Errors.
   IEEE Trans Cybern. 2022 Dec;52(12):13762-13773. doi: 10.1109/TCYB.2021.3108034. Epub 2022 Nov 18.
2. Hamiltonian-Driven Adaptive Dynamic Programming With Efficient Experience Replay.
   IEEE Trans Neural Netw Learn Syst. 2024 Mar;35(3):3278-3290. doi: 10.1109/TNNLS.2022.3213566. Epub 2024 Feb 29.
3. Policy-Iteration-Based Finite-Horizon Approximate Dynamic Programming for Continuous-Time Nonlinear Optimal Control.
   IEEE Trans Neural Netw Learn Syst. 2023 Sep;34(9):5255-5267. doi: 10.1109/TNNLS.2022.3225090. Epub 2023 Sep 1.
4. Continuous-Time Time-Varying Policy Iteration.
   IEEE Trans Cybern. 2020 Dec;50(12):4958-4971. doi: 10.1109/TCYB.2019.2926631. Epub 2020 Dec 3.
5. Finite-approximation-error-based discrete-time iterative adaptive dynamic programming.
   IEEE Trans Cybern. 2014 Dec;44(12):2820-33. doi: 10.1109/TCYB.2014.2354377. Epub 2014 Sep 26.
6. Policy iteration adaptive dynamic programming algorithm for discrete-time nonlinear systems.
   IEEE Trans Neural Netw Learn Syst. 2014 Mar;25(3):621-34. doi: 10.1109/TNNLS.2013.2281663.
7. A Parallel Framework of Adaptive Dynamic Programming Algorithm With Off-Policy Learning.
   IEEE Trans Neural Netw Learn Syst. 2021 Aug;32(8):3578-3587. doi: 10.1109/TNNLS.2020.3015767. Epub 2021 Aug 3.
8. Dual Heuristic Programming for Optimal Control of Continuous-Time Nonlinear Systems Using Single Echo State Network.
   IEEE Trans Cybern. 2022 Mar;52(3):1701-1712. doi: 10.1109/TCYB.2020.2984952. Epub 2022 Mar 11.
9. Data-Driven Dynamic Multiobjective Optimal Control: An Aspiration-Satisfying Reinforcement Learning Approach.
   IEEE Trans Neural Netw Learn Syst. 2022 Nov;33(11):6183-6193. doi: 10.1109/TNNLS.2021.3072571. Epub 2022 Oct 27.
10. Adaptive Interleaved Reinforcement Learning: Robust Stability of Affine Nonlinear Systems With Unknown Uncertainty.
   IEEE Trans Neural Netw Learn Syst. 2022 Jan;33(1):270-280. doi: 10.1109/TNNLS.2020.3027653. Epub 2022 Jan 5.