This article investigates the leader-follower formation learning control (FLC) problem for discrete-time strict-feedback multiagent systems (MASs). The objective is to acquire experience knowledge from the stable leader-follower adaptive formation control process and to improve control performance by reusing that knowledge. First, a two-layer control scheme is proposed to solve the leader-follower formation control problem. In the first layer, by combining adaptive distributed observers with constructed n-step predictors, the followers predict the leader's future state in a distributed manner. In the second layer, adaptive neural network (NN) controllers are constructed for the followers to ensure that all followers track the predicted output of the leader. In the stable formation control process, the NN weights are shown to converge exponentially to their optimal values by developing an extended stability corollary for linear time-varying (LTV) systems. Second, by constructing specific "learning rules," the NN weights with convergent sequences are acquired and stored in the followers as experience knowledge. The stored knowledge is then reused to construct the FLC. The proposed FLC method not only solves the leader-follower formation problem but also improves transient control performance. Finally, the validity of the presented FLC scheme is illustrated by simulations.
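The "learn, store, reuse" idea behind the FLC scheme can be illustrated with a minimal sketch. Everything below is an assumption made for illustration, not the paper's actual design: a Gaussian RBF network stands in for the NN approximator, a normalized-gradient rule stands in for the adaptive weight update, and the target function, trajectory, and gains are invented. The sketch adapts weights along a recurrent trajectory, averages the (approximately) converged tail of the weight sequence as stored "experience," and then reuses the stored weights as a fixed approximator.

```python
import numpy as np

# Illustrative sketch only: the RBF network, the normalized-gradient update,
# the target function f, and all gains below are assumptions, not taken from
# the paper.

def rbf(x, centers, width=0.4):
    """Gaussian radial basis functions S(x) evaluated at a scalar state x."""
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

def adaptive_phase(f, centers, steps=3000, gamma=0.5):
    """Stable adaptation phase: drive W so that W @ S(x) approximates the
    unknown function f(x) along a recurrent (persistently exciting)
    trajectory, then store the tail-averaged weights as experience."""
    W = np.zeros(len(centers))
    history = []
    for k in range(steps):
        x = np.sin(2 * np.pi * k / 50)          # recurrent state trajectory
        S = rbf(x, centers)
        e = f(x) - W @ S                        # approximation error
        W = W + gamma * e * S / (1.0 + S @ S)   # normalized gradient step
        history.append(W.copy())
    # "Learning rule" stand-in: average the weights over the tail of the
    # run, where they have (approximately) converged, and store the result.
    return np.mean(history[-300:], axis=0)

def reuse_error(f, W_bar, centers):
    """Worst-case error when the stored weights are reused as a fixed
    (non-adaptive) approximator over the operating range."""
    xs = np.linspace(-1.0, 1.0, 101)
    return max(abs(f(x) - W_bar @ rbf(x, centers)) for x in xs)

centers = np.linspace(-1.2, 1.2, 13)            # RBF centers over the range
f = lambda x: 0.8 * np.sin(x) + 0.3 * x ** 2    # stand-in unknown dynamics
W_bar = adaptive_phase(f, centers)              # learn and store experience
err = reuse_error(f, W_bar, centers)            # reuse without re-adaptation
print(f"max reuse error: {err:.3f}")
```

The point of the sketch is the workflow the abstract describes: after the adaptive phase converges, the follower no longer needs online adaptation; the stored weights already encode the learned dynamics, which is what allows the reused controller to improve transient performance.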