Convergence of the policy iteration method for discrete and continuous optimal control problems holds under general assumptions. Moreover, in some circumstances, it is also possible to show a quadratic rate of convergence for the algorithm. For Mean Field Games, convergence of the policy iteration method has been recently proved in [9]. Here, we provide an estimate of its rate of convergence.

AMS Subject Classification: 49N80; 35Q89; 91A16; 65N12.
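For readers unfamiliar with the algorithm the abstract refers to, the following is a minimal sketch of policy iteration for a generic finite (discrete) Markov decision process, alternating exact policy evaluation with greedy improvement. It is only an illustration of the classical scheme, not the continuous or Mean Field Games setting studied in the paper; the transition tensor, rewards, and discount factor are hypothetical example data.

```python
import numpy as np

def policy_iteration(P, r, gamma=0.9):
    """Classical policy iteration for a finite MDP (illustrative sketch).

    P: transition tensor of shape (A, S, S), P[a, s, s'] = prob. of s -> s' under action a.
    r: reward matrix of shape (A, S), r[a, s] = reward for taking action a in state s.
    Returns the optimal policy (array of actions per state) and its value function.
    """
    A, S, _ = P.shape
    policy = np.zeros(S, dtype=int)          # start from an arbitrary policy
    while True:
        # Policy evaluation: solve the linear system (I - gamma * P_pi) v = r_pi exactly.
        P_pi = P[policy, np.arange(S), :]    # (S, S) transitions under current policy
        r_pi = r[policy, np.arange(S)]       # (S,)  rewards under current policy
        v = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
        # Policy improvement: act greedily with respect to the current value function.
        q = r + gamma * P @ v                # (A, S) action values
        new_policy = q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            return policy, v                 # fixed point reached: policy is optimal
        policy = new_policy

# Hypothetical 2-state, 2-action example: action 0 stays, action 1 swaps states.
P = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[0.0, 1.0], [1.0, 0.0]]])
r = np.array([[0.0, 1.0],
              [0.5, 0.0]])
policy, v = policy_iteration(P, r, gamma=0.9)
```

In this toy example the scheme terminates after two improvement steps, which reflects the fast (in favorable cases quadratic, as the abstract notes) convergence behavior of policy iteration.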