ReSmooth: Detecting and Utilizing OOD Samples when Training with Data Augmentation

Data augmentation (DA) is a widely used technique for enhancing the training of deep neural networks. Recent DA techniques that achieve state-of-the-art performance generally rely on increasing the diversity of augmented training samples. However, an augmentation strategy with high diversity usually introduces out-of-distribution (OOD) augmented samples, which in turn impair performance. To alleviate this issue, we propose ReSmooth, a framework that first detects OOD samples among the augmented samples and then leverages them. Specifically, we first use a Gaussian mixture model (GMM) to fit the loss distribution of both the original and augmented samples and accordingly split them into in-distribution (ID) samples and OOD samples. We then start a new training run in which ID and OOD samples are assigned different smooth labels. By treating ID and OOD samples unequally, we can make better use of the diverse augmented data. Furthermore, we combine the ReSmooth framework with negative DA (NDA) strategies: by properly handling their intentionally created OOD samples, the classification performance of NDAs is largely improved. Experiments on several classification benchmarks show that ReSmooth can be easily extended to existing augmentation strategies such as RandAugment (RA), rotate, and jigsaw, and improves on them. Our code is available at https://github.com/Chenyang4/ReSmooth.
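To make the two steps in the abstract concrete, the sketch below shows one plausible way to (1) split samples into ID and OOD by fitting a two-component GMM to per-sample losses and (2) apply different label-smoothing strengths to the two groups. It is a minimal illustration, not the authors' implementation: it assumes per-sample cross-entropy losses from a warm-up model are already available, and the function names and smoothing values eps_id / eps_ood are placeholders rather than settings from the paper or its repository.

import numpy as np
import torch
import torch.nn.functional as F
from sklearn.mixture import GaussianMixture


def split_id_ood(per_sample_losses):
    """Fit a two-component GMM to per-sample loss values and treat the
    component with the larger mean loss as the OOD cluster (illustrative)."""
    losses = np.asarray(per_sample_losses, dtype=np.float64).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(losses)
    ood_component = int(np.argmax(gmm.means_.ravel()))  # higher mean loss -> OOD
    return gmm.predict_proba(losses)[:, ood_component] > 0.5  # True where OOD


def resmooth_cross_entropy(logits, targets, is_ood, eps_id=0.0, eps_ood=0.3):
    """Cross-entropy with per-sample label smoothing: ID samples keep (nearly)
    hard labels, OOD augmented samples get a softer target distribution.
    `targets` is a LongTensor of class indices; the eps values are made up."""
    num_classes = logits.size(1)
    eps = torch.where(torch.as_tensor(is_ood, device=logits.device),
                      torch.tensor(eps_ood, device=logits.device),
                      torch.tensor(eps_id, device=logits.device)).unsqueeze(1)
    one_hot = F.one_hot(targets, num_classes).float()
    soft_targets = one_hot * (1.0 - eps) + eps / num_classes
    return -(soft_targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

A typical use would be to collect per-sample losses on the combined original-plus-augmented training set after a warm-up phase, call split_id_ood on them, and then continue training with resmooth_cross_entropy; the split and the smoothing strengths would be tuned per benchmark and augmentation strategy.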

Keywords: OOD; ReSmooth detecting; data augmentation; OOD samples; augmented samples

Journal Title: IEEE Transactions on Neural Networks and Learning Systems
Year Published: 2022
