Recently, Generative Adversarial Network (GAN) based approaches have been applied to facial attribute translation. However, several tasks, e.g. multi-attribute translation and background invariance, are not well handled in the literature. In this paper, we propose a novel GAN-based method that produces higher-quality target images while modifying one or more facial attributes within a single model. The generator is conditioned on a re-coded transfer vector, which enables a single model to learn multiple attributes simultaneously, and it optimizes a cycle-consistency loss to improve the efficiency of multi-attribute transfer. Moreover, the method uses an adaptive parameter to improve how the loss function of the residual image is computed. The results are compared against StarGAN v2, a current state-of-the-art model, to demonstrate the effectiveness of our approach. Experiments show that our method achieves satisfactory performance in multi-attribute facial translation.
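The abstract does not specify the exact loss formulation, so the following is a minimal PyTorch-style sketch of the ideas it names: a re-coded transfer vector conditioning the generator, a cycle-consistency term, and an adaptively weighted residual-image term. All names (G, D, lambda_cyc) and the choice of adaptive weight are illustrative assumptions, not the authors' actual implementation.

```python
import torch.nn.functional as F

def recode_transfer_vector(src_attrs, tgt_attrs):
    """Re-code the attribute change as a transfer vector (assumption):
    -1 = remove attribute, 0 = keep unchanged, +1 = add attribute."""
    return tgt_attrs - src_attrs

def generator_losses(G, D, x, src_attrs, tgt_attrs, lambda_cyc=10.0):
    v = recode_transfer_vector(src_attrs, tgt_attrs)

    # Translate with the transfer vector, then invert it to translate back.
    fake = G(x, v)
    rec = G(fake, -v)

    # Adversarial term (non-saturating form, one common choice among several).
    adv = -D(fake).mean()

    # Cycle-consistency loss: reconstructing x keeps identity and background.
    cyc = F.l1_loss(rec, x)

    # Residual image = the change the generator actually makes. An adaptive
    # weight (here: the fraction of attributes being modified, an assumption)
    # scales how strongly the residual is penalized.
    residual = (fake - x).abs().mean()
    adaptive_w = v.abs().float().mean()
    res = adaptive_w * residual

    return adv + lambda_cyc * cyc + res
```

Under these assumptions, editing more attributes at once tolerates a larger residual, while single-attribute edits are pushed to leave most of the image (including the background) untouched.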