Domain Generalization via Adversarially Learned Novel Domains

This study focuses on the domain generalization task, which aims to learn a model that generalizes to unseen domains by utilizing multiple training domains. More specifically, we follow the idea of adversarial data augmentation, which aims to synthesize and augment training data with “hard” domains to improve the model’s domain generalization ability. However, previous studies augmented training data only with samples similar to the training data, resulting in limited generalization ability. To alleviate this issue, we propose a novel adversarial data augmentation method, termed GADA (generative adversarial domain augmentation), which employs an image-to-image translation model to obtain a distribution of novel domains that are semantically different from the training domains and, at the same time, hard to classify. Evaluation and further analysis suggest that GADA meets our expectations: adversarial data augmentation with semantically different samples leads to better domain generalization performance.
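For illustration only, below is a minimal PyTorch-style sketch of the general adversarial data augmentation idea described in the abstract; it is not the authors' GADA implementation. The translator and classifier architectures, the loss terms, and the hyperparameter lambda_div are hypothetical placeholders. The translator is pushed to produce samples the classifier finds hard while diverging from the source images (a crude stand-in for "semantically different novel domains"), and the classifier is then trained on both the original and the generated samples.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyTranslator(nn.Module):
    """Placeholder image-to-image translation network (stand-in for the
    generator a method like GADA would use; the real architecture differs)."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class TinyClassifier(nn.Module):
    """Placeholder task classifier."""
    def __init__(self, channels=3, num_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)
    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def adversarial_augmentation_step(translator, classifier, x, y,
                                  g_opt, c_opt, lambda_div=0.1):
    """One alternating update. The translator maximizes the task loss of its
    outputs (making them 'hard') and an L1 divergence from the source images
    (a crude proxy for novel domains); the classifier then trains on both
    original and translated samples."""
    # --- translator step: make generated samples hard and divergent ---
    x_novel = translator(x)
    task_loss = F.cross_entropy(classifier(x_novel), y)
    divergence = F.l1_loss(x_novel, x)
    g_loss = -task_loss - lambda_div * divergence
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    # --- classifier step: learn from original + augmented samples ---
    x_novel = translator(x).detach()
    c_loss = F.cross_entropy(classifier(x), y) + \
             F.cross_entropy(classifier(x_novel), y)
    c_opt.zero_grad(); c_loss.backward(); c_opt.step()
    return c_loss.item()

if __name__ == "__main__":
    translator, classifier = TinyTranslator(), TinyClassifier()
    g_opt = torch.optim.Adam(translator.parameters(), lr=1e-4)
    c_opt = torch.optim.Adam(classifier.parameters(), lr=1e-4)
    x = torch.rand(8, 3, 32, 32)          # dummy batch of source-domain images
    y = torch.randint(0, 7, (8,))
    print(adversarial_augmentation_step(translator, classifier, x, y, g_opt, c_opt))

The alternating update mirrors standard adversarial augmentation training loops; the paper's actual objective for obtaining semantically different domains is more involved than the simple L1 divergence used here.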

Keywords: training; domain generalization; novel domains; augmentation; generalization

Journal Title: IEEE Access
Year Published: 2022
