Dependency Analysis of Accuracy Estimates in k-Fold Cross Validation

A standard procedure for evaluating the performance of classification algorithms is k-fold cross validation. Since the training sets for any pair of iterations in k-fold cross validation overlap when the number of folds is larger than two, the resulting accuracy estimates are considered to be dependent. In this paper, the overlapping of training sets is shown to be irrelevant in determining whether two fold accuracies are dependent. A statistical method is then proposed to test the appropriateness of assuming independence for the accuracy estimates in k-fold cross validation. This method is applied to 20 data sets, and the experimental results suggest that it is generally appropriate to assume that the fold accuracies are independent. Cross validation with non-overlapping training sets can make the fold accuracies dependent. However, this dependence has almost no impact on estimating the sample variance of the fold accuracies, and hence they can generally be assumed to be independent.
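The sketch below illustrates the quantities the abstract refers to: the per-fold accuracies produced by k-fold cross validation and their sample variance, which is what the independence assumption affects. It uses scikit-learn and a decision tree on the iris data purely as placeholders; it does not implement the dependency test proposed in the paper.

```python
# Minimal sketch: per-fold accuracies in k-fold cross validation and their
# sample variance under the independence assumption discussed in the paper.
# Classifier, data set, and k are illustrative choices, not the paper's setup.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import StratifiedKFold
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
k = 10
skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)

fold_accuracies = []
for train_idx, test_idx in skf.split(X, y):
    # Training sets of different folds overlap whenever k > 2.
    clf = DecisionTreeClassifier(random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    fold_accuracies.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))

fold_accuracies = np.array(fold_accuracies)
mean_acc = fold_accuracies.mean()
# Sample variance of the fold accuracies; treating the folds as independent
# draws is the assumption whose appropriateness the paper's test examines.
sample_var = fold_accuracies.var(ddof=1)
print(f"mean accuracy = {mean_acc:.3f}, sample variance = {sample_var:.5f}")
```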

Keywords: cross validation; fold cross; accuracy estimates; validation

Journal Title: IEEE Transactions on Knowledge and Data Engineering
Year Published: 2017
