This paper presents a new weight-updating algorithm based on Lyapunov stability theory (LST) for training a neural emulator (NE) of nonlinear systems, combined with an autonomous algorithm inspired by real-time recurrent learning (RTRL). The proposed method is formulated as an inequality-constrained optimization problem in which Lagrange multiplier theory serves as the optimization tool. The contribution of this paper is the integration of LST into the Lagrange constraint function to synthesize a new analytical adaptive gain rate that guarantees the asymptotic stability of the NE and yields good emulation performance. To confirm the performance and convergence of the proposed adaptation algorithm, a numerical example and an experimental validation on a chemical reactor are presented.
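The abstract does not reproduce the derivation, but the general idea of an LST-derived adaptive gain can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the two-layer tanh emulator, the helper names (tanh_emulator, lyapunov_train_step), and the classical stability bound 0 < eta(k) < 2/||d y_hat/d theta||^2 obtained from the Lyapunov candidate V(k) = e(k)^2/2. The paper's analytical gain law, derived via Lagrange multipliers, may differ from this textbook bound.

```python
import numpy as np

# Hypothetical single-output neural emulator: tanh hidden layer (weights w),
# linear output layer (weights v). With the Lyapunov candidate
# V(k) = e(k)^2 / 2, a first-order expansion of the error dynamics gives
# e(k+1) ~ (1 - eta * ||grad||^2) * e(k), so V decreases whenever
# 0 < eta < 2 / ||grad||^2. This is a standard LST bound, not necessarily
# the gain law synthesized in the paper.

def tanh_emulator(w, v, u):
    """Forward pass: hidden tanh layer, linear output."""
    h = np.tanh(w @ u)
    return v @ h, h

def lyapunov_train_step(w, v, u, y_true):
    """One weight update with a stability-preserving adaptive gain."""
    y_hat, h = tanh_emulator(w, v, u)
    e = y_true - y_hat                        # emulation error

    # Gradients of y_hat w.r.t. the weights (RTRL sensitivities reduce
    # to these expressions for a static network)
    grad_v = h
    grad_w = np.outer(v * (1.0 - h**2), u)

    g2 = grad_v @ grad_v + np.sum(grad_w**2)  # squared gradient norm
    eta = 1.0 / (g2 + 1e-8)                   # inside (0, 2/g2): Delta V < 0

    # Gradient-style update scaled by the Lyapunov-admissible gain
    v = v + eta * e * grad_v
    w = w + eta * e * grad_w
    return w, v, e

# Usage sketch: emulate y = sin(u1) + 0.5*u2 from random input samples
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(8, 2))
v = rng.normal(scale=0.1, size=8)
for k in range(2000):
    u = rng.uniform(-1.0, 1.0, size=2)
    w, v, e = lyapunov_train_step(w, v, u, np.sin(u[0]) + 0.5 * u[1])
print("final |e| =", abs(e))
```

Placing eta(k) well inside the admissible interval, rather than at its upper edge, trades convergence speed for robustness to the first-order approximation of the error dynamics; the small regularizer added to g2 merely guards against division by zero when the gradient vanishes.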
               