Rizvi Syed Ali Asad, Lin Zongli
IEEE Trans Neural Netw Learn Syst. 2019 May;30(5):1523-1536. doi: 10.1109/TNNLS.2018.2870075. Epub 2018 Oct 8.
Approximate dynamic programming (ADP) and reinforcement learning (RL) have emerged as important tools in the design of optimal and adaptive control systems. Most of the existing RL and ADP methods make use of full-state feedback, a requirement that is often difficult to satisfy in practical applications. As a result, output feedback methods are more desirable as they relax this requirement. In this paper, we present a new output feedback-based Q-learning approach to solving the linear quadratic regulation (LQR) control problem for discrete-time systems. The proposed scheme is completely online and works without requiring knowledge of the system dynamics. More specifically, a new representation of the LQR Q-function is developed in terms of the input-output data. Based on this new Q-function representation, output feedback LQR controllers are designed. We present two output feedback iterative Q-learning algorithms based on the policy iteration and value iteration methods. This scheme has the advantage that it does not incur any excitation noise bias, and therefore, the need for discounted cost functions is circumvented, which in turn ensures closed-loop stability. It is shown that the proposed algorithms converge to the solution of the LQR Riccati equation. A comprehensive simulation study is carried out to illustrate the proposed scheme.
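To make the abstract's ingredients concrete, the following is a minimal sketch of the textbook Q-learning policy iteration for discrete-time LQR with state feedback, i.e., the baseline that the paper extends to output feedback using input-output data. It is not the authors' algorithm. The plant matrices A and B, the weights Qc and Rc, the exploration noise level, and the iteration counts are all hypothetical and serve only to simulate data; A and B never enter the learning update, which uses only measured samples.

# Sketch: Q-learning policy iteration for discrete-time LQR (state feedback).
# Hypothetical example, not the paper's output-feedback scheme.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical open-loop-stable plant, used only to generate trajectories.
A = np.array([[0.9, 0.3],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
Qc, Rc = np.eye(2), np.eye(1)      # LQR state and input weights
n, m = B.shape

def quad_basis(z):
    """Quadratic basis vec(z z^T) so that Q(x, u) = z^T H z = theta^T quad_basis(z)."""
    return np.kron(z, z)

K = np.zeros((m, n))               # initial stabilizing policy u = K x (plant is stable)
for _ in range(10):
    # Policy evaluation: fit the Q-function matrix H of the current policy from data,
    # using the Bellman equation
    #   Q_K(x_k, u_k) = x_k' Qc x_k + u_k' Rc u_k + Q_K(x_{k+1}, K x_{k+1}).
    regressors, targets = [], []
    x = rng.standard_normal(n)
    for _ in range(200):
        u = K @ x + 0.1 * rng.standard_normal(m)        # exploration noise
        x_next = A @ x + B @ u
        z = np.concatenate([x, u])
        z_next = np.concatenate([x_next, K @ x_next])   # successor action under current policy
        regressors.append(quad_basis(z) - quad_basis(z_next))
        targets.append(x @ Qc @ x + u @ Rc @ u)
        x = x_next
    theta, *_ = np.linalg.lstsq(np.array(regressors), np.array(targets), rcond=None)
    H = theta.reshape(n + m, n + m)
    H = 0.5 * (H + H.T)                                 # enforce symmetry of the estimate

    # Policy improvement: argmin_u Q(x, u) gives K = -H_uu^{-1} H_ux.
    K = -np.linalg.solve(H[n:, n:], H[n:, :n])

# Compare against the LQR gain obtained from the discrete-time Riccati equation.
P = np.eye(n)
for _ in range(500):
    P = Qc + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(Rc + B.T @ P @ B, B.T @ P @ A)
K_riccati = -np.linalg.solve(Rc + B.T @ P @ B, B.T @ P @ A)
print("Q-learning gain :", K)
print("Riccati gain    :", K_riccati)

The learned gain converges to the Riccati solution, mirroring the convergence claim in the abstract; the paper's contribution is to achieve this using input-output data in place of full-state measurements while avoiding excitation noise bias and discounting.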