Optimization is a broad field in which researchers continue to develop new algorithms for solving a wide variety of problems, and several popular techniques are being refined for better performance. Grey wolf optimization (GWO) is one such technique, valued for being efficient, simple to use, and easy to implement. However, GWO has notable drawbacks: it tends to become trapped in local optima, converges slowly, and explores the search space poorly. Several recent attempts have been made to overcome these weaknesses. This paper discusses strategies that can be applied to GWO to address them and proposes a novel algorithm that improves the convergence rate, which is poor in standard GWO; the proposed algorithm is also compared with other optimization algorithms. Because GWO is prone to stagnating in local optima when applied to complex functions or large search spaces, these issues are addressed as well. A notable factor is that GWO depends strongly on initialization constraints such as the population size and the wolves' initial positions. This study demonstrates improved wolf position updates obtained by applying the proposed strategies while keeping the population size unchanged. As a result, the novel algorithm achieves better exploration capability than the other algorithms presented, and statistical results are reported to demonstrate its superiority.
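For context on the position update and initialization constraints the abstract refers to, the following is a minimal sketch of the canonical GWO loop, not the authors' improved variant. The objective function (a simple sphere), the bounds, and all parameter names are illustrative assumptions; the three best wolves (alpha, beta, delta) guide the position update of the rest of the pack, and the outcome depends on the population size and the randomly drawn initial positions.

```python
import numpy as np

def gwo(objective, dim, bounds, pop_size=30, max_iter=200, seed=0):
    """Minimal canonical GWO sketch (illustrative, not the proposed algorithm)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    # Initial wolf positions drawn uniformly within the bounds: one of the
    # initialization constraints the abstract highlights.
    wolves = rng.uniform(lo, hi, size=(pop_size, dim))

    for t in range(max_iter):
        fitness = np.apply_along_axis(objective, 1, wolves)
        # Alpha, beta, delta: the three best wolves in the current pack.
        order = np.argsort(fitness)
        alpha, beta, delta = wolves[order[:3]]

        # 'a' decreases linearly from 2 to 0, shifting the pack from
        # exploration (|A| > 1) toward exploitation (|A| < 1).
        a = 2.0 - 2.0 * t / max_iter

        new_pos = np.zeros_like(wolves)
        for leader in (alpha, beta, delta):
            r1 = rng.random((pop_size, dim))
            r2 = rng.random((pop_size, dim))
            A = 2.0 * a * r1 - a              # step coefficient toward/away from leader
            C = 2.0 * r2                      # random emphasis on the leader's position
            D = np.abs(C * leader - wolves)   # distance to the leader
            new_pos += leader - A * D         # candidate position w.r.t. this leader
        # New position is the average of the three leader-guided candidates.
        wolves = np.clip(new_pos / 3.0, lo, hi)

    fitness = np.apply_along_axis(objective, 1, wolves)
    best = wolves[np.argmin(fitness)]
    return best, objective(best)

# Usage example: minimizing the sphere function in 10 dimensions.
best_x, best_f = gwo(lambda x: np.sum(x**2), dim=10, bounds=(-5.0, 5.0))
print(best_f)
```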
               