

An Improved N-Step Value Gradient Learning Adaptive Dynamic Programming Algorithm for Online Learning.

Author Information

Al-Dabooni Seaar, Wunsch Donald C

Publication Information

IEEE Trans Neural Netw Learn Syst. 2020 Apr;31(4):1155-1169. doi: 10.1109/TNNLS.2019.2919338. Epub 2019 Jun 20.

Abstract

In problems with complex dynamics and challenging state spaces, the dual heuristic programming (DHP) algorithm has been shown theoretically and experimentally to perform well. It was recently extended by an approach called value gradient learning (VGL). VGL was inspired by a version of temporal difference (TD) learning that uses eligibility traces. The eligibility traces apply an exponential decay, governed by a decay parameter λ, to older observations. This approach is known as TD(λ), and its DHP extension is known as VGL(λ), where VGL(0) is identical to DHP. VGL has demonstrated convergence and other desirable properties, but it is primarily useful for batch learning. Online learning requires an eligibility-trace-work-space matrix, which the batch-learning version of VGL does not need. Since online learning is desirable for many applications, it is important to remove this computational and memory impediment. This paper introduces a dual-critic version of VGL, called N-step VGL (NSVGL), that does not need the eligibility-trace-work-space matrix, thereby allowing online learning. Furthermore, this combination of critic networks allows an NSVGL algorithm to learn faster. The first critic is similar to DHP and is adapted based on TD(0) learning, while the second critic is adapted based on the gradient of n-step TD(λ) learning. Both networks are combined to train an actor network. By mixing current information with event history, the combined feedback signals from both critic networks reach an optimal decision faster than traditional adaptive dynamic programming (ADP). Convergence proofs are provided: the gradients of the one- and n-step value functions are monotonically nondecreasing and converge to the optimum. Two simulation case studies are presented to demonstrate the superior performance of NSVGL.
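For reference, the standard TD(λ) machinery the abstract builds on can be written out as follows (textbook definitions, not equations reproduced from the paper). With discount factor \gamma, value estimate V, and reward r:

\delta_t = r_{t+1} + \gamma V(s_{t+1}) - V(s_t)  % one-step TD error, as used by the TD(0) critic
e_t(s) = \gamma \lambda \, e_{t-1}(s) + \mathbb{1}[s = s_t]  % accumulating eligibility trace, decayed by \gamma\lambda
G_t^{(n)} = \sum_{k=1}^{n} \gamma^{k-1} r_{t+k} + \gamma^{n} V(s_{t+n})  % n-step return: n observed rewards, then bootstrap
G_t^{\lambda} = (1-\lambda) \sum_{n=1}^{\infty} \lambda^{n-1} G_t^{(n)}  % \lambda-return: exponentially weighted mix of n-step returns

VGL(λ) applies the same trace-decay idea to the gradient of the value function (the DHP learning target), which is presumably why an online implementation needs a trace entry per critic weight per gradient component, i.e., the eligibility-trace-work-space matrix that NSVGL eliminates.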

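To make the buffer-versus-trace tradeoff concrete, below is a minimal Python sketch of generic online n-step TD value learning, the mechanism from which the second critic's target is built. This is an illustrative sketch only, not NSVGL and not code from the paper; env, sample_action, and the (next_state, reward, done) return signature of env.step are assumed placeholder APIs.

from collections import deque

def nstep_td_value_learning(env, V, alpha=0.1, gamma=0.99, n=5, episodes=100):
    """Online n-step TD value learning with a short transition buffer.

    V is a mutable mapping from state to estimated value. The only
    per-step memory beyond V is a buffer of at most n transitions,
    whereas online TD(lambda) would carry a decayed eligibility trace
    for every learned parameter.
    """
    for _ in range(episodes):
        s = env.reset()
        buffer = deque()  # (state, reward) pairs awaiting their n-step target
        done = False
        while buffer or not done:
            if not done:
                # sample_action is a hypothetical placeholder policy.
                s_next, r, done = env.step(sample_action(s))
                buffer.append((s, r))
                s = s_next
            if len(buffer) == n or done:
                # n-step return for the oldest buffered state:
                # G = r_{t+1} + gamma*r_{t+2} + ... + gamma^{n-1}*r_{t+n}
                #     + gamma^n * V(s_{t+n})   (bootstrap skipped at terminal states)
                s0, _ = buffer[0]
                G = 0.0
                for k, (_, r_k) in enumerate(buffer):
                    G += (gamma ** k) * r_k
                if not done:
                    G += (gamma ** len(buffer)) * V[s]
                V[s0] += alpha * (G - V[s0])
                buffer.popleft()

Once the buffer fills, the loop reaches a steady state in which the state visited n steps ago is updated at every time step, which is what makes the method usable online without any per-parameter trace storage.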
