Zhang Xiaoguang, Yang Zhou, Liu Haitao, Huang Xin
School of Mechanical Engineering, Guangdong Ocean University, Zhanjiang 524088, China.
Guangdong Engineering Technology Research Center of Ocean Equipment and Manufacturing, Zhanjiang 524088, China.
Sensors (Basel). 2025 Sep 2;25(17):5410. doi: 10.3390/s25175410.
This paper proposes optimal sliding mode fault-tolerant control for multiple robotic manipulators in the presence of external disturbances and actuator faults. First, a quantitative prescribed performance control (QPPC) strategy is constructed, which relaxes the constraints on initial conditions while strictly confining the trajectory within a preset range. Second, based on QPPC, adaptive gain integral terminal sliding mode control (AGITSMC) is designed to enhance the disturbance-rejection capability of robotic manipulators in complex environments. Third, a critic-only neural network optimal dynamic programming (CNNODP) strategy is proposed to learn the optimal value function and control policy. This strategy fits nonlinearities solely through critic networks and uses the reinforcement-learning residuals together with historical samples to drive the neural network updates, achieving optimal control at lower computational cost. Finally, the boundedness and stability of the system are proven via the Lyapunov stability theorem. Compared with existing sliding mode control methods, the proposed method reduces the maximum position error by up to 25% and the peak control torque by up to 16.5%, effectively improving the dynamic response accuracy and energy efficiency of the system.
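To make the AGITSMC idea concrete, the following is a minimal sketch, not the paper's exact control law: an integral terminal sliding surface with an adaptive switching gain, applied to a hypothetical 1-DOF link with inertia J under a bounded disturbance. All numerical values (surface coefficients a, b, g, adaptation rate eta, boundary-layer width) are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (assumed toy model, not the paper's law): adaptive-gain
# integral terminal sliding mode control for a 1-DOF link
#   J*q'' = u + d(t),  d(t) an unknown bounded disturbance.
# Sliding surface: s = e' + a*e + b*∫ sig(e)^g dt, with sig(x)^g = |x|^g*sign(x).
# Adaptive switching gain: k' = eta*|s| (grows until s reaches the boundary layer).

J = 1.0                      # link inertia (assumed)
a, b, g = 2.0, 1.0, 0.6      # surface coefficients (hypothetical choices)
eta = 5.0                    # gain adaptation rate (hypothetical)
dt, T = 1e-3, 10.0

def qd(t):                   # desired trajectory: q_d = sin(t)
    return np.sin(t)

q, dq, k, I = 0.5, 0.0, 0.1, 0.0   # deliberately nonzero initial error
for i in range(int(T / dt)):
    t = i * dt
    e, de = q - qd(t), dq - np.cos(t)
    I += dt * np.abs(e) ** g * np.sign(e)   # integral terminal term
    s = de + a * e + b * I                  # sliding variable
    k += dt * eta * abs(s)                  # adaptive gain update
    # equivalent control + adaptive switching term (tanh softens chattering)
    u = J * (-np.sin(t) - a * de - b * np.abs(e) ** g * np.sign(e)) \
        - k * np.tanh(s / 0.05)
    d = 0.5 * np.sin(3 * t)                 # bounded external disturbance
    ddq = (u + d) / J
    dq += dt * ddq                          # explicit Euler integration
    q += dt * dq

print(abs(q - qd(T)))        # tracking error after the transient
```

On the surface s = 0, the error obeys a terminal-sliding dynamic that drives e to zero in finite time, while the adaptive gain k removes the need to know the disturbance bound in advance.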
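The critic-only idea can likewise be sketched on a toy problem. The snippet below is an assumed illustration, not the paper's CNNODP algorithm: a single-feature critic V(x) = w·x² for a scalar linear system with quadratic cost, where the weight is updated by gradient descent on the squared Bellman (HJB) residual over a buffer of historical state samples, mirroring the residual-plus-history update described in the abstract.

```python
import numpy as np

# Illustrative critic-only sketch (toy problem, assumed): learn V(x) = w*x^2
# for the scalar system x' = a*x + b*u with cost ∫ (x^2 + u^2) dt.
# HJB gives the greedy policy u* = -b*V'(x)/2 = -b*w*x; the critic weight w
# is trained by gradient descent on the squared Bellman residual evaluated
# over a fixed buffer of historical state samples.

a, b = -1.0, 1.0
w = 0.0                                # critic weight (single feature x^2)
samples = np.linspace(-2.0, 2.0, 41)   # historical state samples (buffer)
lr = 1e-3

for _ in range(20000):
    u = -b * w * samples               # greedy policy from current critic
    # Bellman (HJB) residual: x^2 + u^2 + V'(x)*(a*x + b*u), V'(x) = 2*w*x
    delta = samples**2 + u**2 + 2 * w * samples * (a * samples + b * u)
    # with u = -b*w*x substituted, delta = x^2*(1 + 2*a*w - b^2*w^2),
    # so d(delta)/dw = x^2*(2*a - 2*b^2*w)
    grad = delta * samples**2 * (2 * a - 2 * b**2 * w)
    w -= lr * grad.mean()              # descend the mean squared residual

w_star = (a + np.sqrt(a**2 + b**2)) / b**2   # Riccati solution of the toy LQR
print(w, w_star)
```

For this scalar LQR the residual vanishes exactly at the Riccati solution, so the learned weight can be checked against it; no actor network is needed, which is the computational appeal of the critic-only structure.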