Recently, there has been significant progress in the development of distributed first-order methods. In particular, Shi et al. (2015) on the one hand, and Qu and Li (2017) and Nedic et al. (2016) on the other, propose two different types of methods designed from very different perspectives. Both achieve exact and linear convergence with a constant step size, a favorable feature that most prior methods could not attain. In this paper, we unify, generalize, and improve the convergence speed of the methods of Shi et al. (2015), Qu and Li (2017), and Nedic et al. (2016) when the underlying network is static and undirected. We first carry out a unifying primal-dual analysis that sheds light on how these methods compare. The analysis reveals that a major difference between the methods lies in how the primal error affects the dual error across iterations. We then capitalize on the insights from this analysis to derive a novel method that reduces the negative effect of the primal error on the dual error. For the proposed generalized method, we establish a global R-linear convergence rate under strongly convex costs with Lipschitz continuous gradients.
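To make the class of methods discussed above concrete, the following is a minimal sketch of the gradient-tracking iteration in the style of Qu and Li (2017) and Nedic et al. (2016), not the generalized method proposed in this paper. It assumes a static, undirected ring network with a doubly stochastic mixing matrix W; the local quadratic costs, network size, and constant step size alpha are all illustrative choices, not taken from the paper.

```python
import numpy as np

# Illustrative decentralized least-squares problem: each of n agents holds a
# private quadratic cost f_i(x) = 0.5 * ||A_i x - b_i||^2. All data here are
# synthetic and the step size is hand-picked, not tuned.
rng = np.random.default_rng(0)
n, d = 5, 3                               # agents, decision-variable dimension
A = rng.standard_normal((n, 4, d))
b = rng.standard_normal((n, 4))

def grad(i, x):
    """Gradient of agent i's local cost at x."""
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a static, undirected ring network
# (Metropolis weights would also work; uniform weights keep the sketch short).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n] = 1 / 3
    W[i, (i + 1) % n] = 1 / 3

alpha = 0.02                              # constant step size
x = np.zeros((n, d))                      # local iterates, one row per agent
y = np.array([grad(i, x[i]) for i in range(n)])   # trackers, y_i^0 = grad f_i(x_i^0)

for _ in range(500):
    # Consensus step plus descent along the tracked average gradient.
    x_new = W @ x - alpha * y
    # Gradient-tracking update: y accumulates local gradient differences so
    # that the average of y tracks the average gradient across agents.
    y = (W @ y
         + np.array([grad(i, x_new[i]) for i in range(n)])
         - np.array([grad(i, x[i]) for i in range(n)]))
    x = x_new

# With a constant step size, all rows of x converge linearly to the common
# minimizer of the sum of the local costs; the spread across agents vanishes.
print(np.ptp(x, axis=0))
```

By contrast, the method of Shi et al. (2015) replaces the explicit tracker y with a correction built from two consecutive iterates; the paper's primal-dual analysis shows how both constructions control the interplay between primal and dual errors.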
               