In learning-based image processing, a model learned in one domain often performs poorly in another because the image samples originate from different sources and thus follow different distributions. Domain adaptation techniques alleviate this problem of domain shift by transferring knowledge learned in the source domain to the target domain. Zero-shot domain adaptation (ZSDA) refers to a category of challenging tasks in which no target-domain samples for the task of interest are available for training. To address this challenge, we propose a simple but effective method based on the strategy of preserving the domain shift across tasks. First, we learn the shift between the source domain and the target domain from an irrelevant task for which sufficient samples from both domains are available. Then, we transfer this domain shift to the task of interest, under the hypothesis that different tasks may share the same domain shift for a given pair of domains. Via this strategy, we can learn a model for the unseen target domain of the task of interest. Our method uses coupled generative adversarial networks (CoGANs) to capture the joint distribution of samples in the two domains and an additional generative adversarial network (GAN) to explicitly model the domain shift. Experimental results on image classification and semantic segmentation demonstrate that our method transfers various kinds of domain shifts across tasks with satisfactory performance.
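The core idea above can be illustrated with a toy, stdlib-only sketch (all names and the additive-shift model here are hypothetical simplifications; the paper models the shift with GANs rather than a per-feature offset): estimate the source-to-target shift on an irrelevant task that has data in both domains, then reuse that shift to synthesize pseudo target-domain samples for the task of interest, whose target domain is unseen.

```python
def mean(vectors):
    """Per-feature mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def estimate_shift(irrelevant_src, irrelevant_tgt):
    """Estimate the domain shift on the irrelevant task as the average
    per-feature offset between its source- and target-domain samples."""
    ms, mt = mean(irrelevant_src), mean(irrelevant_tgt)
    return [t - s for s, t in zip(ms, mt)]

def transfer(shift, interest_src):
    """Apply the learned shift to source-domain samples of the task of
    interest, producing pseudo target-domain samples (the 'shift
    preservation across tasks' hypothesis)."""
    return [[x + d for x, d in zip(v, shift)] for v in interest_src]

# Irrelevant task: features in the source domain and a shifted target domain.
irr_src = [[0.1, 0.2], [0.3, 0.4]]
irr_tgt = [[0.6, 0.7], [0.8, 0.9]]  # same content, uniformly offset by 0.5
shift = estimate_shift(irr_src, irr_tgt)

# Task of interest: only source-domain samples are available at training time.
interest_src = [[0.2, 0.1]]
pseudo_tgt = transfer(shift, interest_src)
```

The additive offset stands in for the GAN-based shift model only to make the transfer strategy concrete; in the actual method, the shift captured on the irrelevant task is a learned generative mapping, not a fixed translation.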