Many existing distributed optimization algorithms are applicable to time-varying networks, but their convergence results are established under the standard $B$-connectivity condition. In this letter, we establish the convergence of the Fenchel dual gradient methods, proposed in our prior work, under a less restrictive and indeed minimal connectivity condition on undirected networks, referred to as joint connectivity, which only requires that the agent interactions occurring infinitely often form a connected graph. Compared with the existing distributed optimization algorithms that are guaranteed to converge under joint connectivity, the Fenchel dual gradient methods can handle nonlinear local cost functions and nonidentical local constraints. We also demonstrate via simulations the effectiveness of the Fenchel dual gradient methods over time-varying networks satisfying joint connectivity.
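For concreteness, the two connectivity conditions can be contrasted as follows (the symbols here are introduced for illustration and may differ from the paper's notation). Model the time-varying network as a sequence of undirected graphs $\mathcal{G}_k = (\mathcal{V}, \mathcal{E}_k)$, $k \ge 0$. Standard $B$-connectivity requires that there exist an integer $B \ge 1$ such that the union graph $(\mathcal{V}, \bigcup_{t=k}^{k+B-1} \mathcal{E}_t)$ is connected for every $k \ge 0$. Joint connectivity only requires that the single graph $(\mathcal{V}, \mathcal{E}_\infty)$ be connected, where $\mathcal{E}_\infty = \{\{i,j\} : \{i,j\} \in \mathcal{E}_k \text{ for infinitely many } k\}$; in particular, no uniform bound $B$ on the time between recurring interactions is imposed, which is why the condition is less restrictive.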