Robust Stabilization of Delayed Neural Networks: Dissipativity-Learning Approach

This paper examines the robust stabilization problem of continuous-time delayed neural networks via the dissipativity-learning approach. A new learning algorithm is established to guarantee the asymptotic stability as well as the $(Q,S,R)$-$\alpha$-dissipativity of the considered neural networks. The developed result encompasses some existing results, such as $H_{\infty}$ and passivity performances, in a unified framework. With the introduction of a Lyapunov–Krasovskii functional together with the Legendre polynomial, a novel delay-dependent linear matrix inequality (LMI) condition and a learning algorithm for robust stabilization are presented. Demonstrative examples are given to show the usefulness of the established learning algorithm.
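In the standard formulation, $(Q,S,R)$-$\alpha$-dissipativity is defined through a quadratic supply rate; the sketch below records that definition and the usual parameter choices that recover $H_{\infty}$ and passivity performance. The paper's exact formulation may differ in minor details (e.g., strictness of the inequality), so this should be read as the generic framework rather than a restatement of the paper's result. The supply rate is

$$
s(u,y) = y^{\top} Q\, y + 2\, y^{\top} S\, u + u^{\top} R\, u,
$$

and a system is $(Q,S,R)$-$\alpha$-dissipative if, under zero initial conditions,

$$
\int_{0}^{T} s\big(u(t), y(t)\big)\, dt \;\ge\; \alpha \int_{0}^{T} u(t)^{\top} u(t)\, dt \quad \text{for all } T \ge 0.
$$

Typical special cases: choosing $Q = -I$, $S = 0$, $R = \gamma^{2} I$ (with $\alpha = 0$) yields the $H_{\infty}$ performance bound $\|y\|_{2} \le \gamma \|u\|_{2}$, while $Q = 0$, $S = I$, $R = 0$ yields passivity.

The paper's delay-dependent LMI, built from a Lyapunov–Krasovskii functional and Legendre polynomials, is not reproduced here. As a minimal illustration of how such conditions are verified numerically, the following sketch checks the classical delay-independent stability LMI for a linear delayed system $\dot{x}(t) = A x(t) + A_{d} x(t-\tau)$ with an off-the-shelf SDP solver; the matrices `A`, `Ad` and the margin `eps` are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not the paper's LMI): feasibility check of the classical
# delay-independent stability condition for x'(t) = A x(t) + Ad x(t - tau),
# obtained from the Lyapunov-Krasovskii functional
#   V = x(t)' P x(t) + int_{t-tau}^{t} x(s)' Q x(s) ds.
# Stability (for any constant delay) holds if P > 0, Q > 0 and
#   [[A'P + P A + Q,  P Ad],
#    [Ad' P,          -Q  ]] < 0.
import numpy as np
import cvxpy as cp

n = 2
A = np.array([[-2.0, 0.0],       # illustrative system matrices (assumed values)
              [0.0, -2.0]])
Ad = np.array([[-0.5, 0.2],
               [0.2, -0.5]])

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
eps = 1e-6                       # small margin to enforce strict inequalities

lmi = cp.bmat([[A.T @ P + P @ A + Q, P @ Ad],
               [Ad.T @ P, -Q]])

constraints = [P >> eps * np.eye(n),
               Q >> eps * np.eye(n),
               lmi << -eps * np.eye(2 * n)]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status)  # 'optimal' means the LMI is feasible, certifying stability
```

The same mechanics apply to the paper's setting: the delay-dependent, Legendre-polynomial-based condition is also an LMI in its decision matrices, so its feasibility and the resulting gains can typically be computed with a semidefinite programming solver in the same way.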

Keywords: robust stabilization; neural networks; dissipativity

Journal Title: IEEE Transactions on Neural Networks and Learning Systems
Year Published: 2019
