Abstract
Generative adversarial networks (GANs) have shown impressive power in the field of machine learning. Traditional GANs have focused on unsupervised learning tasks. In recent years, conditional GANs, which can generate data with labels, have been proposed for semi-supervised learning and have achieved better image quality than traditional GANs. Conditional GANs, however, generally only minimize the difference between the marginal distributions of real and generated data, neglecting the differences between the class-conditional distributions. To address this challenge, we propose the GAN with joint distribution moment matching (JDMM-GAN), which matches the joint distribution based on maximum mean discrepancy and thereby minimizes the differences of both the marginal and conditional distributions. The learning procedure is carried out iteratively via stochastic gradient descent and back-propagation. We evaluate JDMM-GAN on several benchmark datasets, including MNIST, CIFAR-10 and the Extended Yale Face dataset. Compared with state-of-the-art GANs, JDMM-GAN generates more realistic images and achieves the best Inception score on the CIFAR-10 dataset.
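The abstract does not spell out the loss formulation, but the idea it describes, matching the marginal distribution plus each class-conditional distribution with maximum mean discrepancy (MMD), can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration only: the Gaussian RBF kernel, its bandwidth, the weighting factor lam, and the function names rbf_kernel, mmd2 and joint_mmd_loss are hypothetical and not taken from the paper.

import torch

def rbf_kernel(x, y, bandwidth=1.0):
    """Gaussian RBF kernel matrix k(x_i, y_j) = exp(-||x_i - y_j||^2 / (2*bw^2)).

    Kernel choice and bandwidth are illustrative assumptions.
    """
    sq_dists = torch.cdist(x, y) ** 2
    return torch.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd2(x, y, bandwidth=1.0):
    """Biased estimate of the squared MMD between two sample sets x and y."""
    k_xx = rbf_kernel(x, x, bandwidth).mean()
    k_yy = rbf_kernel(y, y, bandwidth).mean()
    k_xy = rbf_kernel(x, y, bandwidth).mean()
    return k_xx + k_yy - 2.0 * k_xy

def joint_mmd_loss(real, fake, real_labels, fake_labels, num_classes, lam=1.0):
    """Marginal MMD plus per-class MMD terms (conditional matching).

    `lam` weights the conditional terms; its value here is a placeholder,
    not the paper's setting.
    """
    loss = mmd2(real, fake)  # match the marginal distributions
    for c in range(num_classes):
        r_c = real[real_labels == c]
        f_c = fake[fake_labels == c]
        if len(r_c) > 1 and len(f_c) > 1:  # need samples of class c from both sets
            loss = loss + lam * mmd2(r_c, f_c)  # match the conditional distributions
    return loss

Under this reading, such a loss would be minimized with respect to the generator parameters by stochastic gradient descent and back-propagation, consistent with the training procedure the abstract describes.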
               