Deep neural networks can learn powerful and discriminative representations from large numbers of labeled samples. However, collecting and annotating large-scale datasets is typically costly, which limits the application of deep learning in many real-world scenarios. Domain adaptation, as a way to compensate for the lack of labeled data, has attracted much attention in the machine learning community. Although many domain adaptation methods have been proposed, most simply focus on matching the distributions of the source and target feature representations, which may fail to encode useful information about the target domain. To learn representations that are both invariant and discriminative across domains, we propose a Cross-Domain Minimization with Deep Autoencoder method for unsupervised domain adaptation, which, in a unified framework, simultaneously learns label prediction on the source domain and input reconstruction on the target domain using shared feature representations aligned via correlation alignment. Furthermore, inspired by adversarial training and the cluster assumption, a task-specific class-label discriminator is incorporated to confuse the predicted target class labels with samples drawn from a categorical distribution, which can be regarded as an entropy minimization regularizer. Extensive empirical results demonstrate the superiority of our approach over state-of-the-art unsupervised adaptation methods on both visual and non-visual cross-domain adaptation tasks.
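To make the loss composition concrete, below is a minimal PyTorch sketch of the kind of objective the abstract describes: a shared encoder trained with source classification, target reconstruction, correlation alignment (CORAL) of the shared features, and a target entropy term. This is an illustrative assumption, not the authors' released code: the class name CDMDA, the MLP heads, and the weights lam/mu/gamma are hypothetical, and the paper's adversarial class-label discriminator is approximated here by a direct entropy penalty, which the abstract notes is the effect it achieves.

import torch
import torch.nn as nn
import torch.nn.functional as F

def coral_loss(source, target):
    """Correlation alignment: match second-order statistics of
    source and target feature batches."""
    d = source.size(1)
    def cov(x):
        # Covariance of a feature batch, centered per batch.
        xm = x - x.mean(dim=0, keepdim=True)
        return xm.t() @ xm / (x.size(0) - 1)
    return ((cov(source) - cov(target)) ** 2).sum() / (4 * d * d)

class CDMDA(nn.Module):
    """Hypothetical sketch: shared encoder, a classifier head used on
    source samples, and a decoder head used on target samples."""
    def __init__(self, in_dim, feat_dim, num_classes):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, num_classes)
        self.decoder = nn.Linear(feat_dim, in_dim)

    def forward(self, x_src, x_tgt):
        z_src, z_tgt = self.encoder(x_src), self.encoder(x_tgt)
        return self.classifier(z_src), self.decoder(z_tgt), z_src, z_tgt

def total_loss(model, x_src, y_src, x_tgt, lam=1.0, mu=1.0, gamma=0.1):
    logits_src, recon_tgt, z_src, z_tgt = model(x_src, x_tgt)
    cls = F.cross_entropy(logits_src, y_src)   # source label prediction
    rec = F.mse_loss(recon_tgt, x_tgt)         # target input reconstruction
    align = coral_loss(z_src, z_tgt)           # correlation alignment
    # Entropy minimization on target predictions (cluster assumption);
    # the paper realizes this via an adversarial label discriminator.
    p_tgt = F.softmax(model.classifier(z_tgt), dim=1)
    ent = -(p_tgt * torch.log(p_tgt + 1e-8)).sum(dim=1).mean()
    return cls + lam * rec + mu * align + gamma * ent

In a training loop, total_loss would be computed on paired mini-batches of labeled source data and unlabeled target data and backpropagated through the shared encoder, so that one set of features serves all four terms.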
               