
Learning 3-D Face Shape From Diverse Sources With Cross-Domain Face Synthesis



Monocular face reconstruction is a significant task in many multimedia applications. However, learning-based methods suffer from the lack of large datasets annotated with 3-D ground truth. To tackle this problem, we propose a novel end-to-end 3-D face reconstruction network consisting of a domain-transfer conditional GAN (cGAN) and a face reconstruction network. Our method first uses the cGAN to translate realistic face images into a specific rendered style, with a novel 2-D facial edge consistency loss function to exploit in-the-wild images. The domain-transferred images are then fed into a 3-D face reconstruction network. We further propose a novel reprojection consistency loss to constrain the 3-D face reconstruction network in a self-supervised way. Our approach can be trained with an annotated dataset, a synthetic dataset, and in-the-wild images to learn a unified face model. Extensive experiments have demonstrated the effectiveness of our method.
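The reprojection consistency idea mentioned in the abstract can be illustrated with a minimal sketch: project predicted 3-D face vertices into the image with a weak-perspective camera and penalize the distance to detected 2-D landmarks, so no 3-D ground truth is needed. The function names, the weak-perspective camera model, and the mean-squared-error form below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def project_points(vertices, scale, rotation, translation):
    """Weak-perspective projection of 3-D vertices to 2-D image coordinates.

    vertices: (N, 3) array; rotation: (3, 3) matrix; translation: (2,) vector.
    The z coordinate is dropped after rotation (orthographic assumption).
    """
    rotated = vertices @ rotation.T              # (N, 3)
    return scale * rotated[:, :2] + translation  # (N, 2)

def reprojection_consistency_loss(vertices_3d, landmarks_2d,
                                  scale, rotation, translation):
    """Mean squared distance between projected 3-D landmarks and 2-D detections.

    Acts as a self-supervised signal: only 2-D landmarks (which a detector can
    provide for in-the-wild images) are required, not 3-D annotations.
    """
    projected = project_points(vertices_3d, scale, rotation, translation)
    return float(np.mean(np.sum((projected - landmarks_2d) ** 2, axis=1)))
```

With an identity camera, a prediction whose projected landmarks match the detections exactly incurs zero loss, and the loss grows with the 2-D displacement, which is what lets the reconstruction network be trained on unlabeled images.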

Keywords: domain; face; reconstruction network; face reconstruction

Journal Title: IEEE MultiMedia
Year Published: 2023



