Optimal Learning Output Tracking Control: A Model-Free Policy Optimization Method With Convergence Analysis.

Author Information

Lin Mingduo, Zhao Bo, Liu Derong

Publication Information

IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):5574-5585. doi: 10.1109/TNNLS.2024.3379207. Epub 2025 Feb 28.

Abstract

Optimal learning output tracking control (OLOTC) in a model-free manner has received increasing attention in both the intelligent control and reinforcement learning (RL) communities. Although model-free tracking control has been achieved via off-policy learning and Q-learning, another popular RL idea, direct policy learning, has rarely been considered despite being easy to implement. To fill this gap, this article develops a novel model-free policy optimization (PO) algorithm that achieves OLOTC for unknown linear discrete-time (DT) systems. The iterative control policy is parameterized and updated by a gradient-based method to directly improve the discounted value function of the augmented system. To implement this algorithm in a model-free manner, a two-point policy gradient (PG) algorithm is designed to approximate the gradient of the discounted value function from sampled states and reference trajectories. Global convergence of the model-free PO algorithm to the optimal value function is established given a sufficient number of samples and proper conditions. Finally, numerical simulation results are provided to validate the effectiveness of the proposed method.
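The two-point policy gradient estimator described in the abstract admits a standard zeroth-order form: evaluate the discounted cost of a perturbed policy at two symmetric points and use the difference to estimate the gradient. Below is a minimal Python sketch of such an estimator for a linear feedback policy u_t = -K z_t on the augmented state, assuming a discounted quadratic tracking cost; the function names, system matrices (A, B, Q, R), and rollout-based cost evaluation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Minimal sketch of a two-point (zeroth-order) policy gradient estimator for a
# linear feedback policy u_t = -K z_t on the augmented state z_t = [x_t; r_t].
# All names below (A, B, Q, R, gamma, horizon) are illustrative assumptions,
# not quantities taken from the paper.

def rollout_cost(K, z0, A, B, Q, R, gamma, horizon):
    """Discounted quadratic tracking cost of the policy u = -K z from z0."""
    z, cost = z0.copy(), 0.0
    for t in range(horizon):
        u = -K @ z
        cost += gamma**t * (z @ Q @ z + u @ R @ u)
        z = A @ z + B @ u
    return cost

def two_point_policy_gradient(K, n_samples, radius, dynamics, rng):
    """Model-free gradient estimate: average d/(2r) * (J(K+rU) - J(K-rU)) * U
    over random unit-norm perturbations U, using only sampled rollouts."""
    A, B, Q, R, gamma, horizon = dynamics
    d = K.size
    grad = np.zeros_like(K)
    for _ in range(n_samples):
        U = rng.standard_normal(K.shape)
        U /= np.linalg.norm(U)                    # unit Frobenius norm
        z0 = rng.standard_normal(K.shape[1])      # sampled initial augmented state
        j_plus = rollout_cost(K + radius * U, z0, A, B, Q, R, gamma, horizon)
        j_minus = rollout_cost(K - radius * U, z0, A, B, Q, R, gamma, horizon)
        grad += (d / (2.0 * radius)) * (j_plus - j_minus) * U
    return grad / n_samples
```

Under the conditions stated in the abstract (a sufficient number of samples and proper conditions), iterating a gradient step such as K ← K − η·grad would drive the feedback gain toward the optimal tracking policy; the paper's global-convergence analysis formalizes this behavior.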
