In this paper, we develop a novel event-triggered robust control strategy for continuous-time nonlinear systems with unmatched uncertainties. First, we establish a relationship showing that the event-triggered robust control can be obtained by solving an event-triggered nonlinear optimal control problem for an auxiliary system. Then, within the framework of reinforcement learning, we propose an adaptive critic approach to solve this event-triggered nonlinear optimal control problem. Unlike the actor-critic dual-approximator structure typical in reinforcement learning, we employ a single critic approximator to derive the solution of the event-triggered Hamilton-Jacobi-Bellman equation arising in the optimal control problem. The critic approximator is updated via the gradient descent method, which requires a persistence of excitation condition. Meanwhile, under a newly proposed event-triggering condition, we prove that the developed critic update rule guarantees that all signals in the auxiliary closed-loop system are uniformly ultimately bounded. Moreover, we demonstrate that the obtained event-triggered optimal control ensures that the original system is stable in the sense of uniform ultimate boundedness. Finally, an F-16 aircraft plant and a nonlinear system are used to validate the proposed event-triggered robust control scheme.
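Since only the abstract is available, the following is a minimal sketch of the single-critic, event-triggered structure it describes, not the paper's algorithm. The dynamics f and g, the feature basis phi, all gains, and the static triggering threshold are illustrative assumptions; in particular, the paper derives a state-dependent triggering condition for an auxiliary system, which is not reproduced here.

```python
import numpy as np

# Illustrative 2-D nonlinear plant xdot = f(x) + g(x) u (assumed, not the
# paper's auxiliary system).
def f(x):
    return np.array([-x[0] + x[1],
                     -0.5 * (x[0] + x[1]) + 0.5 * x[1] * np.sin(x[0]) ** 2])

def g(x):
    return np.array([[0.0], [np.sin(x[0])]])

# Assumed polynomial critic basis phi(x) and its Jacobian, so that the value
# function is approximated as V(x) ~ W^T phi(x).
def phi(x):
    return np.array([x[0] ** 2, x[0] * x[1], x[1] ** 2])

def grad_phi(x):
    return np.array([[2 * x[0], 0.0],
                     [x[1], x[0]],
                     [0.0, 2 * x[1]]])

Q, R = np.eye(2), np.eye(1)   # cost weights in r(x,u) = x'Qx + u'Ru
W = 0.1 * np.ones(3)          # single critic weight vector (no actor network)
alpha, dt = 0.5, 1e-3         # critic learning rate, integration step
x = np.array([1.0, -1.0])
x_hat = x.copy()              # state at the most recent triggering instant
threshold = 0.05              # simplified static trigger (paper: state-dependent)
events = 0

def control(x_s, W):
    # Event-triggered control derived from the critic:
    # u = -1/2 R^{-1} g(x_s)^T grad_phi(x_s)^T W, held between events.
    return -0.5 * np.linalg.inv(R) @ g(x_s).T @ grad_phi(x_s).T @ W

u = control(x_hat, W)
for _ in range(20000):
    # Triggering condition: resample when the gap ||x - x_hat|| is too large.
    if np.linalg.norm(x - x_hat) > threshold:
        x_hat = x.copy()
        u = control(x_hat, W)
        events += 1

    xdot = f(x) + (g(x) @ u).ravel()
    # Hamiltonian (Bellman) residual with the event-held control u.
    r = x @ Q @ x + u @ R @ u
    e = W @ grad_phi(x) @ xdot + r
    # Normalized gradient-descent critic update; persistence of excitation
    # of sigma is assumed for the weights to converge.
    sigma = grad_phi(x) @ xdot
    W -= alpha * dt * e * sigma / (1.0 + sigma @ sigma) ** 2
    x = x + dt * xdot

print("critic weights:", W, "| events:", events, "| ||x||:", np.linalg.norm(x))
```

The point of the sketch is the separation the abstract describes: the control is recomputed only at triggering instants and held at the last sampled state x_hat in between, while the single critic approximator is updated continuously by gradient descent on the Hamilton-Jacobi-Bellman residual.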