Self-Supervised Animation Synthesis Through Adversarial Training

In this paper, we propose a novel deep generative model for image animation synthesis. Based on self-supervised learning and adversarial training, the model can discover labeling rules and apply them without original sample labels. In addition, our model can generate continuously changing images based on the automatically learned labels. The label-learning module can be applied to a large number of unordered samples to generate two types of pseudo-labels: discrete labels and continuous labels. The discrete labels generate different animation clips, and the continuous labels generate different frames within the same clip. By embedding the pseudo-labels together with latent variables into the latent space, our model discovers regularities and features in that space. Animation features are fully characterized by the pseudo-labels learned from the self-supervised module. Using improved adversarial training steps, the model learns to map animation features to pseudo-labels in the latent space and then organizes the pseudo-label embeddings into latent variables to generate animation features. By adapting the dimensions of the pseudo-labels, we match fine features with latent variables. Using the two types of pseudo-labels, our model can also generate different styles of videos from the same dataset. The specific implementation depends on the pseudo-label dimensions and the number of pseudo-label dimensions. Compared with other state-of-the-art approaches, our model does not use complicated components such as 3D convolution layers or recurrent neural networks. Our experimental results show that an appropriate number of pseudo-label dimensions can better characterize animation features; in this case, animations that reach human-level perception can be synthesized. Animation synthesis performance reaches relatively strong results on several challenging datasets.
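
The abstract does not include implementation details, but as a minimal, hypothetical sketch of the conditioning scheme it describes, the discrete pseudo-label (selecting an animation clip) and the continuous pseudo-label (selecting a frame position within the clip) could be concatenated with a latent vector before being fed to a generator. All names, dimensions, and layer choices below are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch (not the authors' code): a generator conditioned on a
    # discrete pseudo-label (clip identity) and a continuous pseudo-label
    # (frame position), concatenated with a latent vector.
    import torch
    import torch.nn as nn

    class PseudoLabelGenerator(nn.Module):
        def __init__(self, z_dim=64, n_discrete=10, cont_dim=1, img_dim=64 * 64):
            super().__init__()
            # latent vector + one-hot clip label + continuous frame position
            in_dim = z_dim + n_discrete + cont_dim
            self.n_discrete = n_discrete
            self.net = nn.Sequential(
                nn.Linear(in_dim, 256), nn.ReLU(),
                nn.Linear(256, 512), nn.ReLU(),
                nn.Linear(512, img_dim), nn.Tanh(),
            )

        def forward(self, z, clip_id, frame_pos):
            # clip_id: (batch,) integer pseudo-label selecting an animation clip
            # frame_pos: (batch, 1) continuous pseudo-label in [0, 1] selecting a frame
            clip_onehot = torch.nn.functional.one_hot(clip_id, self.n_discrete).float()
            h = torch.cat([z, clip_onehot, frame_pos], dim=1)
            return self.net(h)

    # Under this reading of the abstract, sweeping frame_pos from 0 to 1 while
    # holding z and clip_id fixed would produce consecutive frames of one clip.
    gen = PseudoLabelGenerator()
    z = torch.randn(4, 64)
    clip_id = torch.randint(0, 10, (4,))
    frame_pos = torch.rand(4, 1)
    frames = gen(z, clip_id, frame_pos)  # (4, 4096) flattened frames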

Keywords: self-supervised; adversarial training; animation synthesis; animation; model; pseudo-labels

Journal Title: IEEE Access
Year Published: 2020
