In this letter, we introduce a distributed Nesterov gradient method, $\mathcal {ABN}$, that does not require doubly stochastic weights. Instead, the implementation is based on a simultaneous application of both row- and column-stochastic weights, which makes $\mathcal {ABN}$ applicable to arbitrary (strongly-connected) graphs. Since constructing column-stochastic weights needs additional information (the number of outgoing neighbors) that is not available in certain communication protocols, we derive a variation, FROZEN, that only requires row-stochastic weights, but at the expense of additional iterations for eigenvector estimation. We numerically study these algorithms for various objective functions and network parameters and show that the proposed distributed Nesterov gradient methods achieve acceleration compared to the current state-of-the-art methods for distributed optimization.
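The exact $\mathcal {ABN}$ updates are given in the paper itself; the sketch below is only an illustrative, assumed instance of the general idea stated in the abstract: local estimates are mixed with a row-stochastic matrix $A$, gradients are tracked with a column-stochastic matrix $B$, and a momentum-style extrapolation stands in for the Nesterov acceleration. The quadratic objectives, the directed ring network, and the step-size/momentum parameters `alpha` and `beta` are all hypothetical choices made for this sketch.

```python
import numpy as np

# Illustrative sketch (not the exact ABN updates): row-stochastic mixing (A),
# column-stochastic gradient tracking (B), and a momentum extrapolation step.
rng = np.random.default_rng(0)
n, d = 5, 3                                  # 5 agents, 3-dimensional variable

# Local quadratics f_i(x) = 0.5 * ||Q_i x - b_i||^2, with gradient Q_i^T (Q_i x - b_i)
Q = rng.standard_normal((n, d, d))
b = rng.standard_normal((n, d))
grad = lambda X: np.einsum('ikj,ik->ij', Q, np.einsum('ijk,ik->ij', Q, X) - b)

# Directed ring with self-loops: strongly connected but not symmetric
adj = np.eye(n, dtype=bool) | np.roll(np.eye(n, dtype=bool), 1, axis=1)
A = adj / adj.sum(axis=1, keepdims=True)     # row-stochastic weights
B = adj / adj.sum(axis=0, keepdims=True)     # column-stochastic weights

alpha, beta = 0.02, 0.4                      # hypothetical step size and momentum
X = rng.standard_normal((n, d))              # local estimates x_i
Y = grad(X)                                  # gradient trackers y_i
X_prev = X.copy()

for _ in range(2000):
    V = X + beta * (X - X_prev)              # momentum extrapolation
    X_prev = X
    X_new = A @ V - alpha * Y                # row-stochastic mixing + gradient step
    Y = B @ Y + grad(X_new) - grad(X)        # column-stochastic gradient tracking
    X = X_new

# Compare against the centralized minimizer of sum_i f_i
x_star = np.linalg.solve(np.einsum('ikj,ikl->jl', Q, Q), np.einsum('ikj,ik->j', Q, b))
print("distance to optimum:", np.linalg.norm(X - x_star))
print("consensus spread:   ", np.linalg.norm(X - X.mean(axis=0)))
```

In this kind of scheme, the row-stochastic matrix only needs in-neighbor weights, while the column-stochastic matrix requires each agent to scale by its number of out-neighbors, which is precisely the information requirement that motivates the FROZEN variant described in the abstract.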