Unsupervised domain adaptation aims to use labeled data from a source domain to annotate target-domain data, which has no labels. Existing work uses Siamese network-based models to minimize the domain discrepancy and learn domain-invariant features. Aligning the second-order statistics (covariances) of the source and target distributions has proven to be an effective approach. Previous papers measure the distance between covariances with either Euclidean or geodesic (log-Euclidean) methods. However, covariances lie on a Riemannian manifold, and neither method accurately captures the Riemannian distance, so neither aligns the distributions well. To tackle this distribution alignment problem, this paper proposes mapped correlation alignment (MCA), a novel technique for end-to-end domain adaptation with deep neural networks. The method maps covariances from the Riemannian manifold into a reproducing kernel Hilbert space, uses Gaussian radial basis function-based positive definite kernels on manifolds to compute inner products in that space, and then uses the Euclidean metric to measure distances accurately, aligning the distributions better. This paper builds an end-to-end model that minimizes both the classification loss and the MCA loss; the model can be trained efficiently using back-propagation. Experiments show that MCA yields state-of-the-art results on standard domain adaptation data sets.
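The sketch below illustrates what an MCA-style alignment loss could look like in PyTorch, under assumptions not stated in the abstract: it uses a Gaussian RBF kernel built on the log-Euclidean distance between SPD covariance matrices (one known positive definite kernel on the SPD manifold) and measures the squared RKHS distance between the kernel embeddings of the source and target covariances. The function names, the regularization constant, and the kernel bandwidth `gamma` are all hypothetical choices for illustration, not the paper's implementation.

```python
# Hypothetical sketch of an MCA-style loss; the exact kernel and
# regularization used by the paper are not specified in the abstract.
import torch


def covariance(features: torch.Tensor, eps: float = 1e-3) -> torch.Tensor:
    """Covariance of a (batch, dim) feature matrix, regularized to stay SPD."""
    n, d = features.shape
    centered = features - features.mean(dim=0, keepdim=True)
    cov = centered.t() @ centered / (n - 1)
    return cov + eps * torch.eye(d, device=features.device)


def spd_log(mat: torch.Tensor) -> torch.Tensor:
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    eigvals, eigvecs = torch.linalg.eigh(mat)
    return eigvecs @ torch.diag(torch.log(eigvals.clamp_min(1e-8))) @ eigvecs.t()


def mca_loss(source_feats: torch.Tensor,
             target_feats: torch.Tensor,
             gamma: float = 1.0) -> torch.Tensor:
    """Squared RKHS distance between kernel embeddings of the two covariances.

    With a Gaussian kernel k(X, Y) = exp(-gamma * d(X, Y)^2),
    ||phi(Cs) - phi(Ct)||^2 = k(Cs, Cs) + k(Ct, Ct) - 2 k(Cs, Ct)
                            = 2 * (1 - k(Cs, Ct)).
    """
    cov_s = covariance(source_feats)
    cov_t = covariance(target_feats)
    # Assumed log-Euclidean squared distance on the SPD manifold.
    dist_sq = ((spd_log(cov_s) - spd_log(cov_t)) ** 2).sum()
    kernel = torch.exp(-gamma * dist_sq)
    return 2.0 * (1.0 - kernel)
```

In the end-to-end setup the abstract describes, such a term would be added to the classification loss, e.g. `total_loss = cls_loss + lam * mca_loss(f_src, f_tgt)` with a trade-off weight `lam` (hypothetical name), and the whole network optimized by back-propagation.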
               