Face aging has attracted widespread attention in recent years, but most studies assume a single emotional state. Does the same person age in the same way under different emotions? To answer this question, this paper proposes a novel face aging model, DEF-Net, which consists of two parts: an emotion-learning network (Emotion-Net) and a face aging network (Age-Net). Given a target emotion category, DEF-Net first uses Emotion-Net to transfer the target emotion's features onto images from the original dataset; the generated dataset then serves as the input to Age-Net. Meanwhile, multiple loss functions ensure that the crucial information of the original image is not lost. Second, Age-Net, pre-trained on the original dataset, is fine-tuned on the generated dataset to learn the aging distribution under different emotions. Dedicated loss functions ensure that the realistic target images generated by Age-Net retain the learned emotional characteristics. Finally, extensive experiments verify the performance of DEF-Net. Compared with other state-of-the-art methods: (1) DEF-Net can learn different facial emotions across different datasets and generate corresponding realistic aging images; (2) DEF-Net achieves better results than a model that performs face aging first and then learns the emotional characteristics.
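The two-stage pipeline described above (emotion transfer first, then aging, with an identity-preservation loss) can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the function names (`emotion_net`, `age_net`, `identity_loss`), the linear-map stand-ins for the real generator networks, and all dimensions are assumptions made for clarity.

```python
import numpy as np

D = 64  # toy "image" dimensionality (a real model would use conv generators)
rng = np.random.default_rng(0)
EMOTION_EMBED = rng.standard_normal((7, D)) * 0.1  # one row per emotion category
AGE_DIRECTION = rng.standard_normal(D) * 0.1       # a single aging direction

def emotion_net(face, emotion_id):
    # Stage 1 (Emotion-Net): inject the target emotion's features
    # into the source face.
    return face + EMOTION_EMBED[emotion_id]

def age_net(face, age_step):
    # Stage 2 (Age-Net): shift the emotion-bearing face along an
    # aging direction learned from the generated dataset.
    return face + age_step * AGE_DIRECTION

def identity_loss(original, generated):
    # Penalizes losing the crucial information of the original image,
    # in the spirit of the reconstruction losses the abstract mentions.
    return float(np.mean(np.abs(original - generated)))

# Emotion first, then aging -- the ordering the abstract reports
# to outperform aging-then-emotion.
face = rng.standard_normal(D)
emotional = emotion_net(face, emotion_id=3)
aged = age_net(emotional, age_step=2.0)
loss = identity_loss(face, aged)
```

The sketch only conveys the data flow: the generated (emotion-transferred) dataset is what Age-Net consumes, and the identity term keeps both stages anchored to the source image.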