Image-to-image translation converts an image from one domain to another; the goal is to learn the translation relationship between the domains. Compared with translation models that must be trained on paired input–output examples, CycleGAN has the advantage of learning to translate between domains without such paired training data. However, when CycleGAN is used to translate images among multiple domains, the complexity of the model grows nonlinearly with the number of domains. To reduce the model complexity of CycleGAN-based translation models, we assume that there is a hidden space shared by the different domains, and that this space stores the common features of images. We then design a common encoder to learn image features in this hidden space. Based on the hidden space, we propose a translation model whose size scales linearly with the number of domains. To further improve the accuracy of the common feature representation, we introduce an adversarial component in the hidden space to learn the common features. We test the proposed models on several datasets, including painting-style and season-transfer datasets, and achieve good results.
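The scaling claim can be illustrated with a back-of-the-envelope count. A minimal sketch (an assumption for illustration, not the paper's actual code): with a shared hidden space, multi-domain translation needs one common encoder plus one decoder per domain, while pairwise CycleGAN needs a generator for every ordered pair of distinct domains.

```python
def shared_latent_module_count(n_domains: int) -> int:
    """Shared-hidden-space design: 1 common encoder + 1 decoder
    per domain, so the module count is linear in the number of domains."""
    return 1 + n_domains


def pairwise_cyclegan_generator_count(n_domains: int) -> int:
    """Pairwise CycleGAN: one generator per ordered pair of distinct
    domains, so the generator count grows quadratically."""
    return n_domains * (n_domains - 1)


for n in (2, 4, 8):
    print(n, shared_latent_module_count(n), pairwise_cyclegan_generator_count(n))
# For 8 domains: 9 modules in the shared design vs. 56 pairwise generators.
```

The gap widens quickly: at two domains the two designs are comparable, but each added domain contributes one decoder to the shared design versus roughly 2N new generators to the pairwise one.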