Delaunay-based derivative-free optimization via global surrogates. Part III: nonconvex constraints

This paper introduces a Delaunay-based derivative-free optimization algorithm, dubbed Δ-DOGS(Ω), for problems with both (a) a nonconvex, computationally expensive objective function f(x), and (b) nonlinear, computationally expensive constraint functions c_ℓ(x) which, taken together, define a nonconvex, possibly even disconnected feasible domain Ω, which is assumed to lie within a known rectangular search domain Ω_s, everywhere within which f(x) and the c_ℓ(x) may be evaluated. Approximations of both the objective function f(x) and the feasible domain Ω are developed and refined as the iterations proceed. The approach is practically limited to problems with fewer than about ten adjustable parameters. The work is an extension of our original Delaunay-based optimization algorithm (see JOGO, DOI: 10.1007/s10898-015-0384-2), and inherits many of the constructions and strengths of that algorithm, including: (1) a surrogate function p(x) interpolating all existing function evaluations and summarizing their trends, (2) a synthetic, piecewise-quadratic uncertainty function e(x) built on the framework of a Delaunay triangulation amongst the existing datapoints, (3) a tunable balance between global exploration (large K) and local refinement (small K), (4) provable global convergence for a sufficiently large K, under the assumption that the objective and constraint functions are twice differentiable with bounded Hessians, (5) an Adaptive-K variant of the algorithm that efficiently tunes K automatically based on a target value of the objective function, and (6) remarkably fast global convergence on a variety of benchmark problems.
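
As a rough illustration of how components (1)-(3) fit together, the following Python sketch (not the authors' code) uses scipy's Delaunay triangulation to build the piecewise-quadratic uncertainty e(x) from simplex circumspheres, scipy's RBFInterpolator as a stand-in for the surrogate p(x), and a simple constant-K search s(x) = p(x) - K*e(x) minimized over a finite candidate set. The function names, the candidate-set minimization, and the RBF surrogate are illustrative assumptions, and the constraint approximations of Ω are omitted.

# Minimal sketch (illustrative, not the paper's implementation): the surrogate p(x)
# is replaced by scipy's RBFInterpolator, and the next sample is chosen from a
# finite candidate set rather than by a simplex-wise minimization.
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import RBFInterpolator

def circumsphere(vertices):
    # Circumcenter z and squared circumradius R^2 of a d-simplex given as a (d+1, d) array.
    v0 = vertices[0]
    A = 2.0 * (vertices[1:] - v0)
    b = np.sum(vertices[1:] ** 2 - v0 ** 2, axis=1)
    z = np.linalg.solve(A, b)
    return z, np.sum((z - v0) ** 2)

def uncertainty(x, tri):
    # Piecewise-quadratic uncertainty e(x): zero at every datapoint, positive in the
    # interior of each Delaunay simplex (R^2 minus squared distance to the circumcenter).
    idx = int(tri.find_simplex(x.reshape(1, -1))[0])
    if idx < 0:
        return 0.0  # outside the convex hull of the datapoints (avoided in practice by
                    # including the search-domain corners among the initial samples)
    z, r2 = circumsphere(tri.points[tri.simplices[idx]])
    return r2 - np.sum((x - z) ** 2)

def next_sample(X, f_vals, K, candidates):
    # Constant-K search: minimize s(x) = p(x) - K * e(x) over a candidate set.
    # Large K favors global exploration; small K favors local refinement.
    tri = Delaunay(X)
    p = RBFInterpolator(X, f_vals)  # stand-in for the interpolating surrogate p(x)
    e = np.array([uncertainty(c, tri) for c in candidates])
    s = p(candidates) - K * e
    return candidates[np.argmin(s)]

The circumsphere construction makes e(x) vanish at every existing datapoint and remain continuous across simplex boundaries, so subtracting K*e(x) from the interpolant steers the next evaluation toward under-sampled regions when K is large and toward refining the current best region when K is small.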

Keywords: derivative-free optimization; Delaunay-based optimization

Journal Title: Journal of Global Optimization
Year Published: 2020
