
Generative Adversarial Network for Multi Facial Attributes Translation

Recently, Generative Adversarial Network (GAN) based approaches have been applied to facial attribute translation. However, several tasks, such as multi facial attribute translation and background invariance, are not well handled in the literature. In this paper, we propose a novel GAN-based method that generates target images while modifying one or more facial attributes within a single model. The generator takes a re-coded transfer vector as input, so that a single model can learn multiple attributes simultaneously. It also optimizes the cycle-consistency loss to improve the efficiency of multi-attribute transfer. Moreover, the method uses an adaptive parameter to improve how the loss function of the residual image is computed. The results are compared with StarGAN v2, a current state-of-the-art model, to demonstrate the effectiveness of the proposed approach. Experiments show that our method achieves satisfactory performance in multi facial attribute translation.
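
The abstract does not give the exact architecture or loss weighting, but the two core ideas (a generator conditioned on an attribute transfer vector, and a cycle-consistency loss) can be illustrated with a minimal sketch. The sketch below assumes a StarGAN-style setup; names such as SimpleGenerator and attr_dim are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the authors' exact model): a generator conditioned on an
# attribute "transfer vector", plus a cycle-consistency loss, StarGAN-style.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGenerator(nn.Module):
    def __init__(self, img_channels=3, attr_dim=5, hidden=64):
        super().__init__()
        # The attribute vector is tiled spatially and concatenated with the image,
        # so one generator can handle several attributes at once.
        self.net = nn.Sequential(
            nn.Conv2d(img_channels + attr_dim, hidden, 4, 2, 1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(hidden, img_channels, 4, 2, 1),
            nn.Tanh(),
        )

    def forward(self, x, attrs):
        # attrs: (B, attr_dim) attribute/transfer vector, e.g. target attribute codes
        b, _, h, w = x.shape
        attr_map = attrs.view(b, -1, 1, 1).expand(b, attrs.size(1), h, w)
        return self.net(torch.cat([x, attr_map], dim=1))

def cycle_consistency_loss(G, x, src_attrs, trg_attrs):
    # Translate to the target attributes, translate back, and penalize the L1 gap.
    fake = G(x, trg_attrs)
    reconstructed = G(fake, src_attrs)
    return F.l1_loss(reconstructed, x)

# Usage sketch with random data in place of face images.
G = SimpleGenerator()
x = torch.randn(2, 3, 64, 64)                # a batch of face images
src = torch.tensor([[1., 0., 0., 1., 0.],
                    [0., 1., 0., 0., 1.]])   # current attribute codes
trg = 1.0 - src                              # desired attribute codes
loss = cycle_consistency_loss(G, x, src, trg)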

Keywords: facial attributes; generative adversarial; attributes translation; multi facial

Journal Title: IEEE Access
Year Published: 2021
