
Finding Nash equilibrium for imperfect information games via fictitious play based on local regret minimization



Finding a Nash equilibrium in imperfect-information games is a challenging problem that has received much attention. Neural Fictitious Self-Play (NFSP) is a popular model-free machine learning algorithm that has computed approximate Nash equilibria in such games. However, the deep reinforcement learning method used to approximate the best response in NFSP assumes a fully observable Markov state, whereas the states in imperfect-information games are partially observable and non-Markovian, which results in a poor approximation of the best response. NFSP therefore needs more iterations to converge. In this study, we present a new reinforcement learning method, inspired by counterfactual regret minimization, that relaxes the Markov requirement by iteratively updating the policy according to the regret-matching process. Combining this new reinforcement learning algorithm with fictitious play, we further present a novel algorithm for finding approximate Nash equilibria in zero-sum imperfect-information games. Experimental results in three benchmark games show that the new algorithm finds approximate Nash equilibria effectively and converges much faster than the baseline.
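The regret-matching update that the abstract's policy iteration builds on can be sketched as follows. This is a generic illustration of regret matching (as used in counterfactual regret minimization), not the authors' exact local-regret algorithm; the function name and inputs are illustrative:

```python
import numpy as np

def regret_matching_policy(cumulative_regrets: np.ndarray) -> np.ndarray:
    """Return a policy proportional to positive cumulative regrets.

    If no action has positive regret, fall back to a uniform policy,
    as is standard in regret matching.
    """
    positives = np.maximum(cumulative_regrets, 0.0)
    total = positives.sum()
    if total > 0.0:
        return positives / total
    return np.full(len(positives), 1.0 / len(positives))

# Example: regrets of [1, -2, 3] put probability only on actions 0 and 2.
policy = regret_matching_policy(np.array([1.0, -2.0, 3.0]))
```

Iterating this update on accumulated regrets is what lets the policy improve without requiring a fully observable Markov state, since the regrets are tallied per information set rather than per environment state.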

Keywords: equilibrium; information games; nash equilibrium; imperfect information; finding nash

Journal Title: International Journal of Intelligent Systems
Year Published: 2022


