The covariance matrix adaptation evolution strategy (CMA-ES) is one of the state-of-the-art evolutionary algorithms for optimization problems with continuous representation. It has been extensively applied to single-objective optimization problems, and different variants of CMA-ES have also been proposed for multi-objective optimization problems (MOPs). When applied to MOPs, the traditional steps of CMA-ES have to be modified to accommodate multiple objectives. This is particularly evident when the number of objectives is greater than three and, with high probability, all the solutions produced are non-dominated. An open question is to what extent information about the objective values of the non-dominated solutions can be injected into the CMA-ES model for a more effective search. In this paper, we investigate this question using several metrics that describe the quality of the solutions already evaluated, different transfer weight functions, and a set of difficult benchmark instances including many-objective problems. We introduce a number of new strategies that modify how the probabilistic model is learned in CMA-ES. Through an exhaustive empirical analysis on two difficult benchmarks of many-objective functions, we show that the proposed strategies for infusing quality-indicator information into the learned models achieve consistent improvements in the quality of the obtained Pareto fronts and enhance the convergence rate of the algorithm. Moreover, a comparison with a state-of-the-art algorithm from the literature yields competitive results on problems with irregular Pareto fronts.
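
To make the general idea concrete, the minimal sketch below shows one way indicator-derived recombination weights could enter a CMA-ES model update. It is an illustrative assumption, not the method of the paper: the dominance-count indicator, the exponential transfer function, and the names beta, lr_mean, and c_mu are placeholders for the quality metrics and transfer weight functions actually studied, and step-size and evolution-path adaptation are omitted.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return np.all(a <= b) and np.any(a < b)

def dominance_count_indicator(objs):
    """Toy quality indicator: how many candidates each solution dominates.
    Placeholder for indicators such as hypervolume contributions."""
    n = len(objs)
    counts = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j and dominates(objs[i], objs[j]):
                counts[i] += 1
    return counts

def transfer_weights(scores, beta=2.0):
    """Hypothetical transfer weight function: exponential in the normalized
    indicator score, renormalized to sum to one."""
    span = scores.max() - scores.min()
    s = (scores - scores.min()) / (span + 1e-12)
    w = np.exp(beta * s)
    return w / w.sum()

def weighted_model_update(mean, C, sigma, X, objs, lr_mean=1.0, c_mu=0.3):
    """One indicator-weighted recombination step: weighted mean shift and a
    rank-mu style covariance update (no step-size or path adaptation)."""
    w = transfer_weights(dominance_count_indicator(objs))
    y = (X - mean) / sigma                       # standardized sampled steps
    new_mean = mean + lr_mean * sigma * (w @ y)  # indicator-weighted mean shift
    rank_mu = sum(wi * np.outer(yi, yi) for wi, yi in zip(w, y))
    new_C = (1.0 - c_mu) * C + c_mu * rank_mu    # weighted rank-mu update
    return new_mean, new_C

if __name__ == "__main__":
    # Tiny demo on a bi-objective toy problem (not a benchmark from the paper).
    rng = np.random.default_rng(0)
    dim, lam = 5, 12
    mean, C, sigma = np.zeros(dim), np.eye(dim), 0.5
    X = mean + sigma * rng.multivariate_normal(np.zeros(dim), C, size=lam)
    objs = np.stack([np.sum(X**2, axis=1),
                     np.sum((X - 1.0)**2, axis=1)], axis=1)
    mean, C = weighted_model_update(mean, C, sigma, X, objs)
    print("updated mean:", mean)
```

In a full multi-objective CMA-ES loop, the placeholder indicator would be replaced by the chosen quality metric and the weights would also feed the step-size and evolution-path updates; the sketch only illustrates where such weights could be infused into the learned model.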