
Fenchel Dual Gradient Methods for Distributed Convex Optimization Over Time-Varying Networks


We develop a family of Fenchel dual gradient methods for solving constrained, strongly convex, but not necessarily smooth multi-agent optimization problems over time-varying networks. The proposed algorithms are constructed on the basis of weighted Fenchel dual gradients and can be implemented in a fully decentralized fashion. We show that the proposed algorithms drive all the agents to both primal and dual optimality at sublinear rates under a standard connectivity condition. Compared with the existing distributed optimization methods that also have convergence rate guarantees over time-varying networks, our algorithms are able to address constrained problems and have better scalability with respect to network size and time for reaching connectivity. The competent performance of the Fenchel dual gradient methods is demonstrated via simulations.
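The abstract does not spell out the algorithm, but Fenchel dual gradient methods for consensus problems generally follow a dual-decomposition pattern: each agent minimizes its local objective perturbed by the dual variables on its incident edges, then takes a gradient-ascent step on those duals using only neighbor information. The sketch below illustrates that generic pattern on a toy quadratic consensus problem over a time-varying network; the agent data `a`, the alternating edge sets, and the step size `alpha` are all illustrative assumptions, not the paper's weighted duals or step-size rules.

```python
import numpy as np

# Toy setup (assumed, not from the paper): agent i holds the strongly convex
# local cost f_i(x) = 0.5 * (x - a_i)**2, and all agents must agree on x.
# The consensus optimum is then the average of the a_i.
a = np.array([1.0, 3.0, 5.0, 7.0])
n = len(a)

# Time-varying topology: the graph alternates between two edge sets; their
# union is connected, mimicking a standard joint-connectivity condition.
edge_sets = [[(0, 1), (2, 3)], [(1, 2), (3, 0)]]
lam = {(i, j): 0.0 for es in edge_sets for (i, j) in es}  # dual per constraint x_i = x_j
alpha = 0.4  # dual step size; small enough here since each f_i is 1-strongly convex

for t in range(400):
    # Net dual signal at each agent (each agent stores the duals of its incident edges).
    d = np.zeros(n)
    for (i, j), l in lam.items():
        d[i] += l
        d[j] -= l
    # Local primal step: x_i = argmin_x f_i(x) + d_i * x, in closed form for quadratics.
    x = a - d
    # Dual gradient ascent, but only over the currently active edges.
    for (i, j) in edge_sets[t % 2]:
        lam[(i, j)] += alpha * (x[i] - x[j])

# x approaches the consensus optimum, mean(a) = 4.0
```

Strong convexity of each `f_i` is what makes the dual function differentiable with a Lipschitz gradient, which is why the dual ascent step above is well behaved even though the primal objective need not be smooth.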

Keywords: Fenchel dual; dual gradient; gradient methods; time-varying networks

Journal Title: IEEE Transactions on Automatic Control
Year Published: 2019


