In this article, a novel hybrid reinforcement Q-learning control method is proposed to solve the adaptive fuzzy H∞ control problem of discrete-time nonlinear Markov jump systems based on the Takagi-Sugeno fuzzy model. First, the core problem of adaptive fuzzy H∞ control is converted into solving a fuzzy game-coupled algebraic Riccati equation, which can hardly be solved directly by analytical methods. To address this, an offline parallel hybrid learning algorithm is first designed, which requires the system dynamics to be known a priori. Furthermore, an online parallel Q-learning hybrid algorithm is developed. The main characteristics of the proposed online hybrid learning algorithm are threefold: 1) knowledge of the system dynamics is not required during the learning process; 2) compared with the policy iteration method, the requirement of an initial stabilizing control policy is removed; and 3) compared with the value iteration method, a faster convergence rate is obtained. Finally, we provide a tunnel diode circuit system model to validate the effectiveness of the proposed learning algorithms.
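To make the underlying fixed-point problem concrete, the sketch below shows a minimal, model-based value-iteration recursion for a set of coupled game algebraic Riccati equations, one value matrix per Markov mode, for a non-fuzzy linear jump system. It does not reproduce the paper's fuzzy, hybrid, or online Q-learning machinery; the mode dynamics (A_i, B_i, D_i), state weight Qx, control weight R, attenuation level gamma, and transition matrix Pi are illustrative assumptions rather than the authors' notation, and a feasible attenuation level is assumed so the saddle-point blocks are invertible.

```python
import numpy as np

def game_kernel(A, B, D, Qx, R, gamma, Pi, P, i):
    """Build the mode-i zero-sum game Q-function kernel H_i from the current value matrices P."""
    n, m, q = A[i].shape[0], B[i].shape[1], D[i].shape[1]
    Pbar = sum(Pi[i, j] * P[j] for j in range(len(P)))   # coupling across Markov modes via transition probabilities
    G = np.hstack([A[i], B[i], D[i]])                     # augmented dynamics [A_i, B_i, D_i]
    H = G.T @ Pbar @ G
    H[:n, :n] += Qx                                       # state cost
    H[n:n + m, n:n + m] += R                              # control cost
    H[n + m:, n + m:] -= gamma ** 2 * np.eye(q)           # disturbance penalty of the H-infinity game
    return H, n

def coupled_game_riccati_vi(A, B, D, Qx, R, gamma, Pi, iters=500, tol=1e-9):
    """Value-iteration sketch for the coupled game algebraic Riccati equations (one P_i per mode)."""
    N = len(A)
    n = A[0].shape[0]
    P = [np.zeros((n, n)) for _ in range(N)]
    for _ in range(iters):
        P_new = []
        for i in range(N):
            H, n = game_kernel(A, B, D, Qx, R, gamma, Pi, P, i)
            Hxx, Hxz, Hzz = H[:n, :n], H[:n, n:], H[n:, n:]
            # Saddle-point value update: Schur complement of the (control, disturbance) block
            P_new.append(Hxx - Hxz @ np.linalg.solve(Hzz, Hxz.T))
        if max(np.linalg.norm(Pn - Po) for Pn, Po in zip(P_new, P)) < tol:
            P = P_new
            break
        P = P_new
    # Mode-dependent saddle-point gains: [u; w] = K_i x with K_i = -Hzz^{-1} Hzx
    gains = []
    for i in range(N):
        H, n = game_kernel(A, B, D, Qx, R, gamma, Pi, P, i)
        gains.append(-np.linalg.solve(H[n:, n:], H[n:, :n]))
    return P, gains
```

In the paper's setting, the offline algorithm targets the same kind of coupled fixed point but with fuzzy blending of local models, while the online Q-learning variant replaces the model-based kernel construction above with Q-function estimates identified from measured data, which is how the dependence on the system dynamics is removed.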