In this paper, we consider the distributed optimization problem of minimizing a global objective function, formed as the sum of the agents’ local smooth and strongly convex objective functions, over undirected connected graphs. Several distributed accelerated algorithms have been proposed in the existing literature for solving such problems. We provide insights into these existing distributed algorithms from an ordinary differential equation (ODE) point of view. More specifically, we first derive an equivalent second-order ODE, which is the exact limit of these existing algorithms as the step-size tends to zero. Moreover, for quadratic objective functions, we show that the solution of the resulting ODE converges exponentially to the unique global optimal solution. The theoretical results are validated and illustrated by numerical simulations.
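The abstract does not state the specific ODE derived in the paper, so the sketch below is only a rough illustration of the general idea: a hypothetical second-order primal-dual consensus dynamics, simulated on local quadratic objectives over a ring graph. The ODE form, the damping parameter `gamma`, the graph choice, and the local quadratics are all assumptions made for illustration; they are not taken from the paper.

```python
# Illustrative sketch (NOT the paper's ODE): a second-order primal-dual
# consensus dynamics on local quadratics f_i(x) = 0.5 * a_i * (x - b_i)^2,
#     x'' + gamma * x' + grad F(x) + L x + L z = 0,     z' = L x,
# where L is the graph Laplacian and z is a dual-like variable that enforces
# exact consensus at equilibrium.  All parameter choices here are assumptions.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
n = 5                                   # number of agents

# Ring graph Laplacian (undirected, connected).
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0
L = np.diag(adj.sum(axis=1)) - adj

# Local quadratics (smooth, strongly convex) and the global minimizer.
a = rng.uniform(1.0, 3.0, n)
b = rng.uniform(-2.0, 2.0, n)
x_star = (a * b).sum() / a.sum()        # minimizer of sum_i f_i

gamma = 6.0                             # damping, chosen larger than lambda_max(L)

def ode(t, s):
    x, v, z = np.split(s, 3)
    grad = a * (x - b)                  # gradients of the local objectives
    dx = v
    dv = -gamma * v - grad - L @ x - L @ z
    dz = L @ x
    return np.concatenate([dx, dv, dz])

s0 = np.concatenate([rng.standard_normal(n), np.zeros(n), np.zeros(n)])
sol = solve_ivp(ode, (0.0, 30.0), s0, t_eval=np.linspace(0.0, 30.0, 7))

# Distance of the agents' states to the global optimum over time.
for t, s in zip(sol.t, sol.y.T):
    print(f"t = {t:5.1f}   ||x(t) - x*|| = {np.linalg.norm(s[:n] - x_star):.3e}")
```

For this assumed dynamics with quadratic objectives, the printed error decays roughly geometrically in time, which is the kind of exponential-convergence behavior the abstract attributes to the paper's ODE.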