
Escaping Saddle Points for Successive Convex Approximation

Optimizing non-convex functions is of primary importance in modern pattern recognition because it underlies the training of deep networks and nonlinear dimensionality reduction. First-order algorithms under suitable randomized perturbations or step-size rules have been shown to be effective in such settings, as their limit points can be guaranteed to be local extrema rather than saddle points. However, it is well known that the practical convergence of first-order methods is slower than that of methods which exploit additional structure. In particular, successive convex approximation (SCA) empirically converges faster than first-order methods. To date, however, SCA in general non-convex settings is only guaranteed to converge to first-order stationary points, which may be either local extrema or saddle points, and the latter typically yield inferior performance. To mitigate this issue, we propose a calibrated randomized perturbation of SCA that exhibits an improved convergence rate compared with its gradient descent counterpart. In particular, our main technical contribution is to establish non-asymptotic guarantees that the SCA algorithm and its perturbed variant converge to an approximate second-order stationary point. Experiments on multi-dimensional scaling, a machine learning problem whose training objective is non-convex, substantiate the performance gains associated with employing random perturbations.
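The abstract does not spell out the update rule, so the following is only a minimal sketch of what a perturbed SCA iteration can look like: minimize a strongly convex quadratic surrogate around the current iterate, and inject a small random perturbation whenever the gradient is small (i.e. near a first-order stationary point) so the iterates can move off strict saddle points. The surrogate choice, constant step size, Gaussian perturbation, and all constants below are illustrative assumptions, not the authors' calibrated scheme from the paper.

```python
import numpy as np

def perturbed_sca(grad, x0, step=0.1, reg=1.0, radius=1e-2,
                  grad_tol=1e-3, max_iter=1000, rng=None):
    """Sketch of successive convex approximation with random perturbations.

    At each iterate x_t, a strongly convex quadratic surrogate
        u(x) = f(x_t) + grad(x_t)^T (x - x_t) + (reg / 2) * ||x - x_t||^2
    is minimized in closed form, and the iterate moves a fraction `step`
    toward the surrogate minimizer. When the gradient is small, a random
    perturbation of magnitude `radius` is injected to escape saddle points.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < grad_tol:
            # Near a first-order stationary point: perturb randomly so a
            # strict saddle point can be escaped rather than stalled at.
            x = x + radius * rng.standard_normal(x.shape)
            g = grad(x)
        # Closed-form minimizer of the quadratic surrogate around x.
        x_hat = x - g / reg
        # Convex-combination step toward the surrogate minimizer.
        x = x + step * (x_hat - x)
    return x
```

In the paper, the perturbation magnitude, surrogate, and step sizes are calibrated to obtain the stated non-asymptotic second-order guarantees; the constants above are placeholders chosen only to make the sketch runnable, e.g. on a non-convex objective such as the multi-dimensional scaling stress function mentioned in the abstract.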

Keywords: successive convex approximation; first-order methods; saddle points

Journal Title: IEEE Transactions on Signal Processing
Year Published: 2022
