An adaptation of the oscars algorithm for bound constrained global optimization is presented and numerically tested. The algorithm is a stochastic direct search method with low overheads that are constant per sample point. Some sample points are drawn randomly from the feasible region from time to time, ensuring global convergence almost surely under mild conditions. Additional sample points are preferentially placed near previous good sample points to improve the rate of convergence. Connections with partitioning strategies are explored for oscars and the new method, showing that these methods have a reduced risk of sample point redundancy. Numerical testing shows that the method is viable in practice and is substantially faster than oscars in 4 or more dimensions. Comparison with other methods shows good performance in moderately high dimensions. A power law test for identifying and avoiding proper local minima is presented and shown to give modest improvement.
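The sampling scheme described above combines occasional uniform random points (for almost-sure global convergence) with points placed near the incumbent best (for speed). The following is a minimal illustrative sketch of that generic idea only, not the oscars algorithm or its adaptation; the function name, the mixing probability `p_global`, and the Gaussian local step are all assumptions made for illustration.

```python
import random

def mixed_stochastic_search(f, bounds, n_samples=2000, p_global=0.3,
                            radius=0.1, seed=0):
    """Illustrative stochastic direct search on a box (not oscars itself).

    With probability p_global a point is drawn uniformly from the box,
    which in the limit guarantees global convergence almost surely;
    otherwise a point is placed near the current best point to improve
    the rate of convergence. Cost per sample point is constant.
    """
    rng = random.Random(seed)
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    # Initial uniform sample in the feasible box.
    best_x = [rng.uniform(l, h) for l, h in zip(lo, hi)]
    best_f = f(best_x)
    for _ in range(n_samples):
        if rng.random() < p_global:
            # Global phase: uniform draw over the whole box.
            x = [rng.uniform(l, h) for l, h in zip(lo, hi)]
        else:
            # Local phase: Gaussian perturbation of the best point,
            # clamped back into the bounds.
            x = [min(h, max(l, xb + rng.gauss(0.0, radius * (h - l))))
                 for xb, l, h in zip(best_x, lo, hi)]
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

For example, minimizing the sphere function on [-5, 5]^4 with this sketch returns a point whose coordinates remain feasible and whose objective value is nonnegative; unlike oscars, this toy version makes no use of partitioning information, which is what reduces sample point redundancy in the methods studied in the paper.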