Department of A.I. Software Engineering, Seoul Media Institute of Technology, Seoul 07590, Republic of Korea.
Sensors (Basel). 2022 Dec 14;22(24):9811. doi: 10.3390/s22249811.
Recently, there has been growing interest in the consensus of multi-agent systems (MAS), driven by advances in artificial intelligence and distributed computing. Sliding mode control (SMC) is a well-known method that provides robust control in the presence of uncertainties. While our previous study introduced SMC into reinforcement learning (RL) based on approximate dynamic programming in the context of optimal control, this work introduces SMC into a conventional RL framework. As a specific realization, a modified twin delayed deep deterministic policy gradient (TD3) algorithm for consensus is exploited to develop sliding mode RL. Numerical experiments show that sliding mode RL outperforms existing state-of-the-art RL methods and model-based methods in terms of mean square error (MSE) performance.
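The abstract does not specify how the sliding variable enters the RL objective, so the following is only a minimal illustrative sketch of one plausible construction: a first-order sliding variable s_i = ė_i + λe_i built from each agent's local consensus error e_i = Σ_{j∈N_i}(x_i − x_j), with a reward that penalizes distance from the sliding surface s_i = 0. The graph, the value of λ, and the quadratic reward are all assumptions for illustration, not the authors' method.

```python
import numpy as np

LAM = 1.0  # sliding-surface slope lambda (assumed value, not from the paper)

def sliding_variable(x, x_dot, adjacency, i, lam=LAM):
    """Local consensus error and first-order sliding variable for agent i
    (illustrative construction only)."""
    neighbors = np.nonzero(adjacency[i])[0]
    e = np.sum(x[i] - x[neighbors])              # consensus error e_i
    e_dot = np.sum(x_dot[i] - x_dot[neighbors])  # its time derivative
    return e_dot + lam * e

def reward(x, x_dot, adjacency, i):
    """Hypothetical per-agent reward: penalize distance from the surface s_i = 0,
    which a TD3-style actor-critic could then maximize."""
    s = sliding_variable(x, x_dot, adjacency, i)
    return -s**2

# Toy usage: 3 agents on a line graph, initially out of consensus.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
x = np.array([0.0, 1.0, 3.0])      # agent states
x_dot = np.zeros(3)                # agent state derivatives
print([reward(x, x_dot, A, i) for i in range(3)])
```

Under this assumed shaping, the reward is maximal (zero) exactly when every agent sits on its sliding surface, so driving the learned policy toward s_i = 0 mirrors the robustness mechanism of classical SMC.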