
Inexact first-order primal–dual algorithms



We investigate the convergence of a recently popular class of first-order primal–dual algorithms for saddle point problems in the presence of errors in the proximal maps and gradients. We study several types of errors and show that, provided a sufficient decay of these errors, the same convergence rates as for the error-free algorithm can be established. More precisely, we prove the (optimal) $O(1/N)$ convergence to a saddle point in finite dimensions for the class of non-smooth problems considered in this paper, and prove a $O(1/N^2)$ or even linear $O(\theta^N)$ convergence rate if either the primal or the dual objective, respectively both, are strongly convex. Moreover, we show that rates can still be established under a slower decay of the errors; these rates are, however, slower and depend directly on the decay of the errors. We demonstrate the performance and practical use of the algorithms on the example of nested algorithms and show how they can be used to split the global objective more efficiently.
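The abstract refers to first-order primal–dual iterations of the Chambolle–Pock type in which the proximal maps are evaluated only inexactly, with an error whose norm decays over the iterations. Below is a minimal, hypothetical sketch of such an iteration for a toy LASSO-type problem. The test problem, step-size choices, and the synthetic decaying perturbation standing in for an inexact prox are illustrative assumptions, not the paper's exact setup or error model.

```python
import numpy as np

def soft_threshold(v, t):
    # Exact proximal map of t * ||.||_1 (componentwise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inexact_pdhg(K, b, lam, n_iters=500, decay=2.0, c=0.1, seed=0):
    """Sketch of an inexact primal-dual iteration for
        min_x  lam * ||x||_1 + 0.5 * ||K x - b||^2,
    viewed as the saddle-point problem min_x max_y <Kx, y> + g(x) - f*(y)
    with g = lam*||.||_1 and f(z) = 0.5*||z - b||^2.
    The primal prox output is perturbed by a vector of norm c / n**decay
    to mimic an inexact proximal evaluation with summable errors."""
    rng = np.random.default_rng(seed)
    m, d = K.shape
    L = np.linalg.norm(K, 2)        # operator norm of K
    tau = sigma = 0.9 / L           # step sizes with tau * sigma * L**2 < 1
    x = np.zeros(d)
    x_bar = x.copy()
    y = np.zeros(m)
    for n in range(1, n_iters + 1):
        # Dual step: prox of sigma * f*, where f*(y) = 0.5*||y||^2 + <b, y>,
        # so prox_{sigma f*}(v) = (v - sigma*b) / (1 + sigma).
        y = (y + sigma * (K @ x_bar) - sigma * b) / (1.0 + sigma)
        x_old = x
        # Primal step: soft-thresholding, then a decaying perturbation
        # standing in for the error of an inexact prox computation.
        x = soft_threshold(x_old - tau * (K.T @ y), tau * lam)
        if c > 0.0:
            e = rng.standard_normal(d)
            x = x + (c / n**decay) * e / np.linalg.norm(e)
        # Extrapolation step of the primal-dual scheme.
        x_bar = 2.0 * x - x_old
    return x
```

With `decay > 1` the error sequence is summable, which is the kind of sufficient-decay condition under which the abstract says the error-free rates survive; setting `c=0.0` recovers the exact iteration for comparison.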

Keywords: first-order primal–dual algorithms; convergence

Journal Title: Computational Optimization and Applications
Year Published: 2020

