In this article, an augmented game approach is proposed for the formulation and analysis of distributed learning dynamics in multiagent games. Through the design of the augmented game, the coupling structure of the players' utility functions can be reformulated over an arbitrary undirected connected network while the Nash equilibria are preserved. As a result, any full-information game learning dynamics can be recast in a distributed form, and its convergence can be determined from the structure of the augmented game. We apply the proposed approach to generate both deterministic and stochastic distributed gradient play and obtain several negative convergence results: 1) a Nash equilibrium that is convergent under the classic gradient play may have a corresponding augmented Nash equilibrium that is not convergent under the distributed gradient play and, conversely, 2) a Nash equilibrium that is not convergent under the classic gradient play may have a corresponding augmented Nash equilibrium that is convergent under the distributed gradient play. In particular, we show that the variational stability structure of a game (including monotonicity as a special case) is not guaranteed to be preserved in its augmented game. These results provide a systematic methodology for formulating and then analyzing the feasibility of distributed game learning dynamics.
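To make the setting concrete, the following is a minimal sketch of consensus-based distributed gradient play, a standard variant of the dynamics discussed above rather than the paper's exact augmented-game construction. The quadratic utilities, the mixing matrix W, the parameters b and alpha, and the helper partial_grad are all illustrative assumptions: each player keeps a local estimate of the full action profile, averages estimates with its graph neighbors, and takes a gradient step in its own coordinate.

```python
import numpy as np

# Hedged sketch: consensus-based distributed gradient play on a 3-player game.
# The quadratic utilities, ring network, and step size below are illustrative
# assumptions, not taken from the paper.

N = 3
# Doubly stochastic mixing matrix of an undirected connected ring.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

b = np.array([1.0, 2.0, 3.0])   # hypothetical payoff parameters

def partial_grad(i, x):
    """d u_i / d x_i for the hypothetical utility
    u_i(x) = -0.5*x_i**2 - 0.1*x_i*sum_{j != i} x_j + b_i*x_i,
    a strongly monotone quadratic game with a unique Nash equilibrium."""
    return b[i] - x[i] - 0.1 * (x.sum() - x[i])

alpha = 0.05                    # gradient step size
x_hat = np.zeros((N, N))        # row i: player i's local estimate of the profile

for _ in range(500):
    x_hat = W @ x_hat           # consensus: mix estimates with graph neighbors
    for i in range(N):
        # Each player ascends its own utility in its own coordinate only,
        # evaluated at its local (possibly inconsistent) estimate.
        x_hat[i, i] += alpha * partial_grad(i, x_hat[i])

print("estimated Nash equilibrium:", np.round(x_hat.diagonal(), 3))
```

For this monotone example the local estimates reach consensus near the Nash equilibrium; the abstract's negative results say precisely that such convergence is not guaranteed to transfer between a game and its augmented counterpart in general.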
               