
Convergence Rates of Distributed Two-Time-Scale Gradient Methods under Random Quantization



Abstract: Motivated by broad applications in engineering and the sciences, we study distributed consensus-based gradient methods for solving optimization problems over a network of nodes. A fundamental challenge in solving this problem is the impact of finite communication bandwidth: information exchanged between the nodes must be quantized. In this paper, we utilize dithered (random) quantization and study a distributed variant of the well-known two-time-scale methods for solving the underlying optimization problems under the constraint of finite bandwidth. In addition, we provide further insight into, and an explicit formula for, the design of the step sizes of these two-time-scale methods and their impact on the performance of the algorithms.
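The abstract's two-time-scale idea pairs a fast-decaying consensus step size with a more slowly decaying gradient step size, while dithered quantization makes the quantization error zero-mean. The following is a minimal illustrative sketch of that general scheme, not the paper's algorithm: the quadratic local objectives, the complete communication graph, the quantization resolution, and the step-size exponents are all hypothetical choices made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def dithered_quantize(x, delta, rng):
    # Dithered (random) quantization: add uniform noise in [-delta/2, delta/2),
    # then round to the nearest point of a lattice with spacing delta.
    # The resulting quantization error is zero-mean, the key property
    # exploited by analyses of quantized consensus methods.
    dither = rng.uniform(-delta / 2, delta / 2, size=x.shape)
    return delta * np.round((x + dither) / delta)

# Hypothetical setup: n nodes, node i holds f_i(x) = 0.5 * (x - b_i)^2,
# so the minimizer of sum_i f_i is the mean of the b_i.
n = 5
b = rng.normal(size=n)
W = np.full((n, n), 1.0 / n)    # doubly stochastic mixing matrix (complete graph)

x = np.zeros(n)                 # one scalar decision variable per node
delta = 0.05                    # quantization resolution (finite bandwidth)
for k in range(1, 10001):
    beta = k ** -0.6            # fast time scale: consensus step size
    alpha = 1.0 / k             # slow time scale: gradient step size
    q = dithered_quantize(x, delta, rng)        # what neighbors actually receive
    grad = x - b                                # local gradients of f_i
    x = x - beta * (x - W @ q) - alpha * grad   # two-time-scale update
```

Because beta decays more slowly than alpha, the consensus dynamics are "fast" relative to the gradient dynamics, so all nodes first agree and then jointly track the optimum; after the loop, every entry of `x` lies close to `b.mean()`.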

Keywords: random quantization; time scale; gradient methods; two-time-scale

Journal Title: IFAC-PapersOnLine
Year Published: 2019



