This paper studies the projected saddle-point dynamics associated with a convex–concave function, which we term the saddle function. The dynamics consists of gradient descent of the saddle function in the variables corresponding to convexity and (projected) gradient ascent in the variables corresponding to concavity. We examine the role that the local and/or global nature of the convexity–concavity properties of the saddle function plays in guaranteeing convergence and robustness of the dynamics. Under the assumption that the saddle function is twice continuously differentiable, we provide a novel characterization of the omega-limit set of the trajectories of the dynamics in terms of the diagonal blocks of the Hessian. Using this characterization, we establish global asymptotic convergence of the dynamics under local strong convexity–concavity of the saddle function. When strong convexity–concavity holds globally, we establish three results. First, we identify a Lyapunov function, strictly decreasing along the trajectories, for the projected saddle-point dynamics when the saddle function is the Lagrangian of a general constrained convex optimization problem. Second, for the particular case in which the saddle function is the Lagrangian of an equality-constrained optimization problem, we show input-to-state stability (ISS) of the saddle-point dynamics by providing an ISS Lyapunov function. Third, we use the latter result to design an opportunistic state-triggered implementation of the dynamics. Various examples illustrate our results.
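For concreteness, the dynamics summarized above is commonly written as follows. This is a sketch under assumptions standard in this literature but not spelled out in the abstract: the saddle function F(x, z) is convex in x and concave in z, and the projection keeps the ascent variables in the nonnegative orthant, as when z collects the multipliers of inequality constraints; the precise formulation appears in the paper itself.

\[
\dot{x} = -\nabla_x F(x, z), \qquad \dot{z} = \big[ \nabla_z F(x, z) \big]_z^{+},
\]

where the projection acts componentwise, leaving the i-th component of the vector field unchanged when z_i > 0 and replacing it with its positive part when z_i = 0, so trajectories remain in the nonnegative orthant while the dynamics reduces to plain gradient ascent in the interior. When z corresponds only to equality constraints, the multipliers are unconstrained and no projection is needed, which is the setting of the ISS result mentioned above.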
               
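To make the convergence claim concrete, here is a minimal numerical sketch, not taken from the paper: a forward-Euler simulation of the saddle-point dynamics for the Lagrangian of a small equality-constrained quadratic program. The problem data, step size, and horizon are made-up illustration choices; since the constraints are equalities, the multiplier z is free and no projection is applied.

import numpy as np

# Saddle function: the Lagrangian F(x, z) = 0.5 x'Qx + z'(Ax - b) of
#   minimize 0.5 x'Qx  subject to Ax = b.
Q = np.array([[2.0, 0.0], [0.0, 4.0]])  # positive definite: strong convexity in x
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

def grad_x(x, z):
    # Gradient of F in the primal (descent) variables.
    return Q @ x + A.T @ z

def grad_z(x, z):
    # Gradient of F in the dual (ascent) variables.
    return A @ x - b

# Forward-Euler discretization of  xdot = -grad_x F,  zdot = +grad_z F.
x = np.array([5.0, -3.0])
z = np.zeros(1)
dt = 1e-2
for _ in range(20000):
    x, z = x - dt * grad_x(x, z), z + dt * grad_z(x, z)

print("x ~", x)               # approaches the constrained minimizer (2/3, 1/3)
print("Ax - b ~", A @ x - b)  # constraint violation decays toward zero

The trajectory converges to the unique saddle point of the Lagrangian, i.e., the primal-dual optimizer of the quadratic program, consistent with the global asymptotic convergence result stated in the abstract.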