
Fuzzy H∞ Control of Discrete-Time Nonlinear Markov Jump Systems via a Novel Hybrid Reinforcement Q-Learning Method


In this article, a novel hybrid reinforcement Q-learning control method is proposed to solve the adaptive fuzzy H∞ control problem of discrete-time nonlinear Markov jump systems based on the Takagi-Sugeno fuzzy model. First, the core problem of adaptive fuzzy H∞ control is converted into solving a fuzzy game-coupled algebraic Riccati equation, which can hardly be solved directly by analytical methods. To address this, an offline parallel hybrid learning algorithm is first designed, in which the system dynamics must be known a priori. An online parallel Q-learning hybrid learning algorithm is then developed. The main characteristics of the proposed online hybrid learning algorithm are threefold: 1) knowledge of the system dynamics is not required during the learning process; 2) compared with the policy iteration method, the requirement of an initial stabilizing control policy is removed; and 3) compared with the value iteration method, a faster convergence rate is obtained. Finally, a tunnel diode circuit system model is provided to validate the effectiveness of the proposed learning algorithms.
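To give a feel for the kind of fixed-point problem the abstract refers to, the sketch below shows a hybrid value/policy-iteration scheme for the zero-sum game Riccati equation that underlies discrete-time H∞ control. It is not the paper's algorithm: the Takagi-Sugeno fuzzy rules, the Markov jump modes, and the model-free Q-learning formulation are all omitted, and the matrices A, B, D, Q, R, the scalar gamma, and the function names are illustrative placeholders.

```python
# Minimal numerical sketch (not the authors' method) of a hybrid
# value/policy-iteration scheme for the zero-sum game algebraic Riccati
# equation behind discrete-time H-infinity control.  Single linear mode,
# known dynamics; fuzzy rules and Markov jumps are deliberately left out.
import numpy as np

def game_riccati_step(P, A, B, D, Q, R, gamma):
    """One value-iteration sweep of the zero-sum game Riccati recursion."""
    # Block kernel over the control input u and the disturbance w.
    Huu = R + B.T @ P @ B
    Hww = -gamma**2 * np.eye(D.shape[1]) + D.T @ P @ D
    Huw = B.T @ P @ D
    H = np.block([[Huu, Huw], [Huw.T, Hww]])
    G = np.vstack([B.T @ P @ A, D.T @ P @ A])
    # Saddle-point gains for the current value matrix P: [u; w] = -K x.
    K = np.linalg.solve(H, G)
    Ku, Kw = K[:B.shape[1]], K[B.shape[1]:]
    # Riccati update and the closed-loop matrix used for policy evaluation.
    P_next = Q + A.T @ P @ A - G.T @ K
    A_cl = A - B @ Ku - D @ Kw
    return P_next, Ku, Kw, A_cl

def hybrid_iteration(A, B, D, Q, R, gamma, inner_steps=5, tol=1e-9, max_iter=500):
    """Value-iteration outer loop (no stabilizing initial policy required)
    plus a few inner policy-evaluation sweeps for a faster convergence rate."""
    P = np.zeros_like(Q)
    for _ in range(max_iter):
        P_new, Ku, Kw, A_cl = game_riccati_step(P, A, B, D, Q, R, gamma)
        # Inner loop: partially evaluate the fixed saddle-point policy via a
        # Lyapunov-type recursion; this is the policy-iteration-like speed-up.
        Qc = Q + Ku.T @ R @ Ku - gamma**2 * Kw.T @ Kw
        for _ in range(inner_steps):
            P_new = Qc + A_cl.T @ P_new @ A_cl
        if np.max(np.abs(P_new - P)) < tol:
            return P_new, Ku, Kw
        P = P_new
    return P, Ku, Kw
```

The outer loop is value-iteration-like, so no initial stabilizing policy is needed, while the inner policy-evaluation sweeps provide the policy-iteration-style acceleration the abstract highlights; the paper's contribution is to obtain this behavior in a model-free Q-learning form for fuzzy Markov jump systems.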

Keywords: reinforcement learning; discrete time; novel hybrid; fuzzy control; hybrid reinforcement; time nonlinear

Journal Title: IEEE Transactions on Cybernetics
Year Published: 2022



