Abstract Training an Artificial Neural Network (ANN) is not trivial: it requires optimizing a set of weights and biases whose number grows dramatically with the capacity of the network, yielding hard optimization problems. Over recent decades, stochastic search algorithms have shown a remarkable ability to address such problems. At the same time, many real-world problems suffer from class imbalance, where the distribution of data varies considerably among classes, introducing training bias and variance that degrade the performance of the learning algorithm. This paper applies three stochastic metaheuristic algorithms, namely Grey Wolf Optimization (GWO), Particle Swarm Optimization (PSO), and the Salp Swarm Algorithm (SSA), to train a Multilayer Perceptron (MLP) neural network for imbalanced classification. The resulting GWO-MLP, PSO-MLP, and SSA-MLP trainers are driven by different objective functions (accuracy, F1-score, and G-mean) and are evaluated on 10 benchmark imbalanced datasets. The results show that the F1-score and G-mean fitness functions outperform accuracy when the datasets are imbalanced.
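The core idea of metaheuristic MLP training is to flatten all network weights and biases into a single vector and let the swarm algorithm search that space, scoring each candidate with an imbalance-aware fitness such as the G-mean. The sketch below illustrates this with a plain global-best PSO on a small synthetic imbalanced dataset; the network size, PSO coefficients, and data are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic imbalanced binary dataset (assumption: 2 features,
# 90/10 class ratio) standing in for the paper's benchmark datasets.
n_maj, n_min = 90, 10
X = np.vstack([rng.normal(0.0, 1.0, (n_maj, 2)),
               rng.normal(2.5, 1.0, (n_min, 2))])
y = np.array([0] * n_maj + [1] * n_min)

H = 5                      # hidden units (illustrative choice)
dim = 2 * H + H + H + 1    # W1 (2xH) + b1 (H) + W2 (H) + b2 (1)

def mlp_predict(w, X):
    """Decode a flat weight vector into a 2-H-1 MLP and predict labels."""
    W1 = w[:2 * H].reshape(2, H)
    b1 = w[2 * H:3 * H]
    W2 = w[3 * H:4 * H]
    b2 = w[4 * H]
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return (out >= 0.5).astype(int)

def g_mean(w):
    """Geometric mean of per-class recalls: the imbalance-aware fitness."""
    pred = mlp_predict(w, X)
    tpr = np.mean(pred[y == 1] == 1)   # recall on the minority class
    tnr = np.mean(pred[y == 0] == 0)   # recall on the majority class
    return np.sqrt(tpr * tnr)

# Standard global-best PSO update over the flattened weight vector;
# inertia and acceleration coefficients are common defaults.
n_particles, iters = 30, 100
pos = rng.uniform(-1, 1, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_fit = np.array([g_mean(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    fit = np.array([g_mean(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved] = pos[improved]
    pbest_fit[improved] = fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print(f"best g-mean found: {pbest_fit.max():.3f}")
```

Swapping GWO or SSA in place of the PSO update loop changes only the position-update rule; the weight encoding and the G-mean (or F1-score) fitness stay the same, which is what makes these trainers interchangeable in the paper's comparison.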
               