In this paper we develop random block coordinate descent methods for minimizing large-scale linearly constrained convex problems over networks. Since the problem contains coupled constraints, we devise an algorithm that, at each iteration, updates in parallel at least two random components of the solution, chosen according to a given probability distribution. These computations can be performed in a distributed fashion according to the structure of the network. The per-iteration complexity of the proposed methods is usually lower than that of the full gradient method when the number of nodes in the network is much larger than the number of updated components. On smooth convex problems, we prove that these methods exhibit a sublinear worst-case convergence rate in the expected value of the objective function; moreover, this rate depends linearly on the number of components updated per iteration. On smooth strongly convex problems we prove that our methods converge linearly. We also study how to choose the probabilities so that the randomized algorithms converge as fast as possible, which leads to solving a sparse semidefinite program. We then describe several applications that fit our framework, in particular the convex feasibility problem. Finally, numerical experiments illustrate the behaviour of our methods, showing in particular that updating more than two components in parallel accelerates convergence.
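To make the pair-update idea concrete, the following is a minimal illustrative sketch, not the paper's actual algorithm: a random 2-coordinate descent step for a toy separable quadratic objective under a single coupling constraint sum(x) = b. The objective, the uniform pair sampling, and the exact line-search step rule are all assumptions chosen for illustration; the paper considers general linearly coupled constraints over a network and optimized (nonuniform) probabilities.

```python
import numpy as np

# Illustrative sketch only (assumed toy problem, not the authors' method):
# minimize f(x) = 0.5 * sum(a_i * x_i^2) - sum(c_i * x_i)  subject to  sum(x) = b.
# Updating a random pair (i, j) along the direction e_i - e_j keeps the
# coupling constraint sum(x) = b satisfied at every iteration.

rng = np.random.default_rng(0)
n = 100
a = rng.uniform(1.0, 10.0, n)   # per-coordinate curvatures (coordinate Lipschitz constants)
c = rng.normal(size=n)
b = 5.0

def grad(x):
    # Gradient of the separable quadratic objective.
    return a * x - c

x = np.full(n, b / n)           # feasible starting point: sum(x) = b
for _ in range(20000):
    i, j = rng.choice(n, size=2, replace=False)   # uniform pair sampling (one possible distribution)
    g = grad(x)
    # Exact minimization of f along the feasible direction e_i - e_j:
    t = (g[j] - g[i]) / (a[i] + a[j])
    x[i] += t
    x[j] -= t

print(abs(x.sum() - b))         # coupling constraint preserved up to round-off
```

Each step touches only two components, so the per-iteration cost is independent of n; updating several disjoint pairs in parallel, as the abstract describes, follows the same pattern.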