
Convergence rates analysis of a multiobjective proximal gradient method

In 2019, Tanabe et al. proposed proximal gradient methods for multiobjective optimization. Under reasonable assumptions, they proved that the accumulation points of the sequences generated by these methods are Pareto stationary. However, that paper did not establish convergence rates. Here, we prove global convergence rates for multiobjective proximal gradient methods that match the known rates in scalar optimization. More specifically, using merit functions to measure complexity, we prove the rates for non-convex problems ($O(\sqrt{1 / k})$), convex ones ($O(1 / k)$), and strongly convex ones ($O(r^k)$ for some~$r \in (0, 1)$). We also extend the so-called Polyak-{\L}ojasiewicz (PL) inequality to multiobjective optimization and establish the rate for multiobjective problems satisfying it ($O(r^k)$ for some~$r \in (0, 1)$).
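
For intuition about the method under analysis, here is a minimal sketch of the iteration in the special case where the nonsmooth terms $g_i$ of the composite objectives $F_i = f_i + g_i$ are zero; the proximal subproblem then reduces to the classical multiobjective steepest-descent direction. The function names, the toy bi-objective problem, and the constant L below are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of the iteration analyzed in the paper, in the special
# case g_i = 0. Assumptions (not from the paper): the bi-objective test
# problem, the helper names, and the Lipschitz constant L.

import numpy as np

def direction(g1, g2, L):
    """Direction d solving min_d max_i <g_i, d> + (L/2)||d||^2 (two objectives).

    The dual asks for the shortest vector in the convex hull of {g1, g2},
    which has a closed form when there are only two gradients.
    """
    diff = g1 - g2
    denom = diff @ diff
    lam = 0.5 if denom == 0.0 else np.clip(g2 @ (g2 - g1) / denom, 0.0, 1.0)
    return -(lam * g1 + (1.0 - lam) * g2) / L

def multiobjective_gradient_method(x, grad1, grad2, L, tol=1e-10, max_iter=10_000):
    """Iterate x <- x + d; d = 0 characterizes Pareto stationarity."""
    for _ in range(max_iter):
        d = direction(grad1(x), grad2(x), L)
        if np.linalg.norm(d) < tol:
            break
        x = x + d
    return x

# Toy bi-objective problem: f1(x) = ||x - a||^2, f2(x) = ||x - b||^2.
# Its Pareto set is the segment between a and b; gradients are 2-Lipschitz.
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x_star = multiobjective_gradient_method(
    np.array([2.0, 2.0]),
    grad1=lambda x: 2.0 * (x - a),
    grad2=lambda x: 2.0 * (x - b),
    L=2.0,
)
print(x_star)  # lands on the segment between a and b, e.g. [0.5 0.5]
```

In the general composite case, the inner problem additionally carries the terms $g_i(x + d) - g_i(x)$ inside the max, so the closed form above is replaced by solving that convex subproblem at each step.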

Keywords: optimization; convergence rates; proximal gradient; multiobjective proximal

Journal Title: Optimization Letters
Year Published: 2022
