In evaluating the performance of software defect prediction models, accuracy measures such as precision and recall are commonly used. However, most of these measures are affected by the neg/pos ratio of the data set being predicted, where neg is the number of negative cases (defect-free modules) and pos is the number of positive cases (defective modules). Thus, it is not fair to compare such values across data sets with different neg/pos ratios, and doing so may even lead to misleading or contradictory conclusions. The objective of this study is to address the class imbalance issue in assessing the performance of defect prediction models. The proposed method relies on computing the expected values of accuracy measures based solely on the neg and pos values of the data set. From these expected values, we derive neg/pos-normalized accuracy measures, defined as the divergence of a measure from its expected value divided by the standard deviation over all possible prediction outcomes. The proposed measures enable a ranking of predictions across different data sets that distinguishes successful predictions from unsuccessful ones. Results from a case study of defect prediction on 19 defect data sets indicate that the ranking of predictions under the proposed measures differs significantly from the rankings produced by conventional accuracy measures such as precision and recall, as well as by the composite measures F1-value, AUC of ROC, MCC, G-mean, and Balance. In addition, we conclude that MCC is a better measure of defect prediction accuracy than F1-value, AUC of ROC, G-mean, and Balance.
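To make the normalization idea concrete, the following is a minimal Python sketch, not the study's implementation: it assumes the expected value and standard deviation are taken over a uniform enumeration of all confusion matrices attainable for a data set with fixed pos and neg, and the function names, the precision definition, and the example counts (tp=40, fp=10, pos=50, neg=50 or 450) are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def normalized_measure(measure, tp, fp, pos, neg):
    """Normalize an accuracy measure by its expected value and standard
    deviation over all prediction outcomes possible for a data set with
    `pos` defective and `neg` defect-free modules (fn = pos - tp, tn = neg - fp)."""
    # Enumerate every attainable confusion matrix for the fixed pos/neg split
    # (assumption: outcomes are weighted uniformly).
    outcomes = [measure(t, f, pos, neg)
                for t in range(pos + 1)
                for f in range(neg + 1)]
    outcomes = np.array([v for v in outcomes if not np.isnan(v)])
    expected, spread = outcomes.mean(), outcomes.std()
    # Divergence from the expected value, in units of the outcome standard deviation.
    return (measure(tp, fp, pos, neg) - expected) / spread

def precision(tp, fp, pos, neg):
    # Precision is undefined when no module is predicted defective.
    return tp / (tp + fp) if (tp + fp) > 0 else np.nan

# Hypothetical predictions with identical raw precision (0.8) on a balanced
# and an imbalanced data set, yielding different neg/pos-normalized scores.
print(normalized_measure(precision, tp=40, fp=10, pos=50, neg=50))
print(normalized_measure(precision, tp=40, fp=10, pos=50, neg=450))
```

Under this reading, the same raw precision of 0.8 receives a higher normalized score on the imbalanced data set, because the expected precision of an arbitrary prediction drops as the neg/pos ratio grows; this is the effect the abstract describes, though the paper's exact definition of the expectation may differ.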
               