A standard procedure for evaluating the performance of classification algorithms is k-fold cross validation. Since the training sets of any pair of iterations in k-fold cross validation overlap when the number of folds is larger than two, the resulting accuracy estimates are generally considered dependent. In this paper, the overlap of training sets is shown to be irrelevant in determining whether two fold accuracies are dependent. A statistical method is then proposed to test the appropriateness of assuming independence for the accuracy estimates in k-fold cross validation. The method is applied to 20 data sets, and the experimental results suggest that it is generally appropriate to assume that the fold accuracies are independent. Cross validation with non-overlapping training sets can still yield dependent fold accuracies. However, this dependence has almost no impact on estimating the sample variance of the fold accuracies, and hence they can generally be assumed to be independent.
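The abstract does not detail the paper's specific independence test, but the quantities it discusses are easy to reproduce. The sketch below, assuming scikit-learn and a decision tree classifier chosen purely for illustration, shows how per-fold accuracies are obtained from k-fold cross validation and how their sample variance feeds the usual variance estimate of the mean accuracy, which is exactly the estimate whose validity rests on the independence assumption.

```python
# Minimal sketch (not the paper's proposed test): compute per-fold
# accuracies from k-fold cross validation and estimate their sample
# variance. Under the independence assumption, Var(mean) ~= s^2 / k.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
k = 10
cv = KFold(n_splits=k, shuffle=True, random_state=0)

# One accuracy estimate per fold; note that each fold's training set
# overlaps with every other fold's training set whenever k > 2.
fold_acc = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=cv)

s2 = fold_acc.var(ddof=1)  # sample variance of the k fold accuracies
print(f"fold accuracies: {np.round(fold_acc, 3)}")
print(f"mean accuracy:   {fold_acc.mean():.3f}")
print(f"sample variance: {s2:.5f}")
print(f"Var(mean) assuming independent folds: {s2 / k:.6f}")
```

If the fold accuracies were strongly dependent, s^2 / k would misstate the variance of the mean accuracy; the paper's finding is that, in practice, treating the folds as independent leaves this estimate essentially unaffected.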