Learning deep neural networks from noisy labels is challenging because high-capacity networks attempt to fit the data even when class labels are noisy. In this study, we propose a self-augmentation method that requires no additional parameters and handles noisily labeled data based on the small-loss criterion. To this end, we exploit small-loss samples by introducing a noise-robust probabilistic model based on a Gaussian mixture model (GMM), in which small-loss samples follow class-conditional Gaussian distributions. By augmenting samples with this GMM-based probabilistic model, we effectively mitigate the over-parameterization problems induced by label inconsistency among small-loss samples. We further enhance the quality of the small-loss samples with a data-adaptive selection strategy. Consequently, our method prevents over-parameterization of the networks and improves their generalization performance. Experimental results demonstrate that our method outperforms state-of-the-art methods for learning with noisy labels on several benchmark datasets, producing a performance gain of up to 12% over previous state-of-the-art methods on the CIFAR datasets.
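As a rough illustration of the pipeline described above, and not the authors' implementation, the sketch below selects small-loss samples by keeping the lowest-loss fraction within each class, fits one class-conditional Gaussian per class as a stand-in for the GMM-based probabilistic model, and draws synthetic feature vectors from it. The `keep_ratio` threshold, the function names, and the use of precomputed feature vectors are all assumptions made for this sketch.

```python
# Minimal sketch (assumed setup: per-sample losses and feature vectors are
# already available; "small-loss" selection keeps the lowest-loss fraction
# per class; one diagonal Gaussian per class approximates the GMM model).
import numpy as np


def select_small_loss(losses, labels, keep_ratio=0.5):
    """Return indices of the lowest-loss samples within each class."""
    keep = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        k = max(1, int(keep_ratio * len(idx)))
        keep.extend(idx[np.argsort(losses[idx])[:k]])
    return np.array(keep)


def fit_class_gaussians(features, labels):
    """Fit a mean and diagonal variance per class (class-conditional Gaussians)."""
    stats = {}
    for c in np.unique(labels):
        x = features[labels == c]
        stats[c] = (x.mean(axis=0), x.var(axis=0) + 1e-6)
    return stats


def sample_augmented(stats, cls, n, rng=None):
    """Draw n synthetic feature vectors for class `cls` from its Gaussian."""
    rng = rng or np.random.default_rng(0)
    mean, var = stats[cls]
    return rng.normal(mean, np.sqrt(var), size=(n, mean.shape[0]))


# Toy usage with random data standing in for network features and losses.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 16))
labels = rng.integers(0, 4, size=200)
losses = rng.random(200)

keep = select_small_loss(losses, labels, keep_ratio=0.5)
stats = fit_class_gaussians(features[keep], labels[keep])
aug = sample_augmented(stats, cls=0, n=32)
print(aug.shape)  # (32, 16)
```

In the paper, the selection threshold is chosen data-adaptively rather than with a fixed `keep_ratio`, and the generated samples supplement the small-loss set during training; this sketch only conveys the overall structure of selection, density estimation, and sampling.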