Integral reinforcement learning-based guaranteed cost control for unknown nonlinear systems subject to input constraints and uncertainties

Subject areas: industrial biotechnology; electrical engineering, electronic engineering, information engineering; engineering and technology
DOI: 10.1016/j.amc.2021.126336 Publication Date: 2021-05-29T05:01:45Z
ABSTRACT
This paper investigates the guaranteed cost control (GCC) problem for nonlinear systems subject to input constraints and disturbances by utilizing reinforcement learning (RL). First, by establishing a modified Hamilton–Jacobi–Isaacs (HJI) equation, which is difficult to solve analytically, a model-based policy iteration (PI) GCC algorithm is designed for input-constrained nonlinear systems with disturbances. Moreover, without requiring any knowledge of the system dynamics, an online model-free GCC approach is developed via integral reinforcement learning (IRL) by designing an auxiliary system with a control law and an auxiliary disturbance policy. To implement the proposed algorithm, actor and disturbance neural networks (NNs) are constructed to approximate the optimal control input and the worst-case disturbance policy, while a critic NN approximates the optimal value function. Furthermore, a synchronous weight update law is developed to minimize the NN approximation residual errors. The asymptotic stability of the closed-loop system is analyzed by applying Lyapunov's method. Finally, the effectiveness and feasibility of the proposed control method are verified through two nonlinear simulation examples.
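The abstract describes an IRL scheme in which critic, actor, and disturbance approximators are tuned from integrated cost data without a system model. The sketch below is a minimal, hypothetical illustration of that general structure, not the paper's algorithm: the dynamics f, g, k, the polynomial basis phi, the gains, the quadratic stand-in for the nonquadratic input penalty, and the simple rule pulling actor/disturbance weights toward the critic are all assumptions made for a runnable demo.

```python
import numpy as np

# ----- illustrative 2-D nonlinear system (assumed, not taken from the paper) -----
def f(x):   # drift dynamics
    return np.array([-x[0] + x[1],
                     -0.5 * (x[0] + x[1]) + 0.5 * x[1] * np.sin(x[0]) ** 2])

def g(x):   # control-input gain
    return np.array([0.0, np.sin(x[0])])

def k(x):   # disturbance-input gain
    return np.array([0.0, np.cos(x[0])])

# ----- shared polynomial basis for the critic, actor, and disturbance approximators -----
def phi(x):   # value-function features
    return np.array([x[0] ** 2, x[0] * x[1], x[1] ** 2])

def dphi(x):  # Jacobian of phi w.r.t. x  (3 x 2)
    return np.array([[2 * x[0], 0.0],
                     [x[1],     x[0]],
                     [0.0,      2 * x[1]]])

lam, gamma, Q = 1.0, 2.0, np.eye(2)   # input bound, attenuation level, state weight (assumed)

def control(x, Wa):      # tanh-saturated control keeps |u| <= lam (input constraint)
    return -lam * np.tanh(g(x) @ (dphi(x).T @ Wa) / (2.0 * lam))

def disturbance(x, Wd):  # estimate of the worst-case disturbance policy
    return k(x) @ (dphi(x).T @ Wd) / (2.0 * gamma ** 2)

def cost_rate(x, u, d):  # quadratic stand-in for the paper's nonquadratic input penalty
    return x @ Q @ x + u ** 2 - gamma ** 2 * d ** 2

# ----- IRL-style loop: integrate the cost over an interval, then update the weights -----
rng = np.random.default_rng(0)
Wc, Wa, Wd = rng.standard_normal(3), rng.standard_normal(3), rng.standard_normal(3)
dt, steps_per_interval, alpha_c, alpha_aw = 0.01, 20, 0.5, 0.1
x = np.array([1.0, -1.0])

for episode in range(200):
    x0, integral_cost = x.copy(), 0.0
    for _ in range(steps_per_interval):
        u, d = control(x, Wa), disturbance(x, Wd)
        integral_cost += cost_rate(x, u, d) * dt
        x = x + (f(x) + g(x) * u + k(x) * d) * dt          # Euler step (no model needed at update time)
    dphi_interval = phi(x) - phi(x0)
    bellman_residual = integral_cost + Wc @ dphi_interval  # IRL Bellman equation residual
    # normalized gradient step on the squared residual (critic update)
    Wc -= alpha_c * bellman_residual * dphi_interval / (1.0 + dphi_interval @ dphi_interval) ** 2
    # simple synchronous pull of actor/disturbance weights toward the critic (illustrative only)
    Wa -= alpha_aw * (Wa - Wc)
    Wd -= alpha_aw * (Wd - Wc)
    if np.linalg.norm(x) > 10.0:    # crude reset if the exploratory trajectory diverges
        x = rng.uniform(-1.0, 1.0, size=2)

print("learned critic weights:", Wc)
```

The key design choice mirrored here is that the weight update uses only integrated cost and state measurements over each interval, so the drift dynamics f never enter the learning step; the specific normalization and synchronization gains above are placeholders.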