We investigate the convergence of a recently popular class of first-order primal–dual algorithms for saddle-point problems in the presence of errors in the proximal maps and gradients. We study several types of errors and show that, provided these errors decay sufficiently fast, the same convergence rates as for the error-free algorithm can be established. More precisely, we prove the (optimal) $$O\left( 1/N\right)$$ convergence to a saddle point in finite dimensions for the class of non-smooth problems considered in this paper, and prove an $$O\left( 1/N^2\right)$$ or even linear $$O\left( \theta ^N\right)$$ convergence rate if the primal or the dual objective, respectively both, are strongly convex. Moreover, we show that rates can still be established under a slower decay of the errors, although these rates are slower and depend directly on that decay. We demonstrate the performance and practical use of the algorithms on the example of nested algorithms, and show how they can be used to split the global objective more efficiently.
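The setting described above can be illustrated with a minimal sketch of a first-order primal–dual iteration (in the style of Chambolle–Pock) applied to a small LASSO-type saddle-point problem, where the dual proximal map is deliberately perturbed by an error whose norm decays like $O(1/k^2)$. This is a hypothetical illustration, not the paper's algorithm or experiments: the problem instance, step sizes, and error model are all assumptions chosen for the sketch.

```python
import numpy as np

# Illustrative sketch (not the paper's method): inexact primal-dual iteration
# for min_x 0.5*||A x - b||^2 + lam*||x||_1, written as the saddle-point
# problem min_x max_y <A x, y> + lam*||x||_1 - (0.5*||y||^2 + <b, y>).
# The dual proximal map is perturbed by an error of norm 1/k^2, mimicking
# a "sufficient decay" condition on the errors.

rng = np.random.default_rng(0)
m, n = 40, 60
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
lam = 0.1                          # l1 regularization weight (assumed)

L = np.linalg.norm(A, 2)           # operator norm of A
tau = sigma = 1.0 / L              # step sizes with tau*sigma*L^2 = 1
theta = 1.0                        # over-relaxation parameter

def soft_threshold(v, t):
    """Proximal map of t*||.||_1 (exact primal prox)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def objective(x):
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))

x = np.zeros(n)
x_bar = x.copy()
y = np.zeros(m)
for k in range(1, 501):
    # Inexact dual prox: exact prox of sigma*f* for f*(y)=0.5||y||^2+<b,y>,
    # plus an additive error whose norm is 1/k^2.
    err = rng.standard_normal(m)
    err *= 1.0 / (k ** 2 * (np.linalg.norm(err) + 1e-12))
    y = (y + sigma * (A @ x_bar) - sigma * b) / (1.0 + sigma) + err
    # Exact primal prox (soft-thresholding for the l1 term).
    x_new = soft_threshold(x - tau * (A.T @ y), tau * lam)
    x_bar = x_new + theta * (x_new - x)
    x = x_new

print(objective(x))
```

Under this fast error decay the iteration behaves essentially like the exact algorithm: the primal objective settles near its minimum despite the perturbed dual prox, consistent with the abstract's claim that a sufficient decay of the errors preserves the error-free rates.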