In this manuscript, we identify and evaluate some of the most widely used optimization models for rule extraction with genetic programming-based algorithms. Six models, combining the most common fitness functions, were tested. These functions employ well-known metrics such as support, confidence, sensitivity, specificity, and accuracy. The models were then used to assess the performance of a single algorithm on several real classification problems, and the results were compared using two criteria: accuracy and sensitivity/specificity. This comparison, supported by statistical analysis, indicated that the product of sensitivity and specificity provides a more realistic estimate of classifier performance. It also showed that the accuracy metric can bias the classifier, especially on unbalanced databases.
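To see why accuracy can be misleading on unbalanced data while the product of sensitivity and specificity is not, consider a minimal sketch (not from the paper; the 95:5 class split and the always-negative classifier are hypothetical):

```python
# Sketch: compare accuracy with sensitivity * specificity on an
# imbalanced confusion matrix (95 negatives, 5 positives).

def metrics(tp, fn, tn, fp):
    """Compute accuracy and the sensitivity-specificity product."""
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return accuracy, sensitivity * specificity

# A degenerate classifier that predicts "negative" for every instance:
acc, sens_spec = metrics(tp=0, fn=5, tn=95, fp=0)
print(acc)        # 0.95 -- accuracy looks strong
print(sens_spec)  # 0.0  -- the product reveals no positives are detected
```

The product penalizes any classifier that ignores one of the classes, which is the bias in accuracy that the abstract refers to.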