Meta-Reweighted Regularization for Unsupervised Domain Adaptation

Unsupervised domain adaptation (UDA) enables knowledge transfer from a labeled source domain to an unlabeled target domain by reducing the cross-domain distribution discrepancy, and the adversarial-learning-based paradigm has achieved remarkable success. On top of the derived domain-invariant feature representations, a promising stream of recent work seeks to further regularize the classification decision boundary via self-training, learning a target-adaptive classifier from pseudo-labeled target samples. However, since pseudo labels are inevitably noisy, most prior methods focus on manually designing elaborate target-selection algorithms or optimization objectives to combat the negative effects of incorrect pseudo labels. In contrast, this paper proposes a simple and powerful meta-learning-based target-reweighting regularization algorithm, called MetaReg, which regularizes model training by learning to reweight the noisy pseudo-labeled target samples. Specifically, MetaReg is motivated by the intuition that an ideal target classifier trained on correct target pseudo labels should make small classification errors on target-like source samples. Therefore, we explicitly define a meta-reweighting problem that seeks the optimal weights for the different target pseudo labels by minimizing the classification loss on a designed validation set: a class-balanced set of source samples that are most similar to the target ones. This optimization problem can be solved efficiently with a simplified approximation technique. The automatically learned optimal weights are then used to reweight the pseudo-labeled target samples, regularizing model training with target supervision of learned, per-sample importance. Comprehensive experiments on several cross-domain image and text datasets verify that MetaReg outperforms its non-regularized UDA counterparts and achieves state-of-the-art performance. Code is available at https://github.com/BIT-DA/MetaReg.
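For illustration, the sketch below shows one common way such a meta-reweighting step can be realized with a one-step, learning-to-reweight style approximation in PyTorch: per-sample perturbation weights are attached to the pseudo-label loss, a virtual gradient step is taken on the classifier, and the validation loss on target-like source samples is differentiated back through that step to obtain the sample weights. This is a minimal sketch of the general idea under assumed names (meta_reweight, W, b, inner_lr) and a plain linear classifier; it is not the released MetaReg implementation.

import torch
import torch.nn.functional as F

def meta_reweight(W, b, x_tgt, y_pseudo, x_val, y_val, inner_lr=0.1):
    # Per-sample loss on the pseudo-labeled target batch, scaled by
    # perturbation weights eps initialized at zero.
    eps = torch.zeros(x_tgt.size(0), requires_grad=True)
    tgt_losses = F.cross_entropy(x_tgt @ W + b, y_pseudo, reduction="none")
    weighted_loss = (eps * tgt_losses).sum()

    # One virtual gradient step on the classifier parameters; the graph is
    # retained so the updated parameters remain a function of eps.
    grad_W, grad_b = torch.autograd.grad(weighted_loss, (W, b), create_graph=True)
    W_new, b_new = W - inner_lr * grad_W, b - inner_lr * grad_b

    # Classification loss of the virtually updated classifier on the
    # class-balanced validation set of target-like source samples.
    val_loss = F.cross_entropy(x_val @ W_new + b_new, y_val)
    (grad_eps,) = torch.autograd.grad(val_loss, eps)

    # Up-weight samples whose increased influence lowers the validation loss;
    # clip negative weights to zero and normalize.
    w = torch.clamp(-grad_eps, min=0.0)
    return w / (w.sum() + 1e-8)

# Toy usage: 3-class problem with 10-dimensional features.
torch.manual_seed(0)
W = torch.randn(10, 3, requires_grad=True)
b = torch.zeros(3, requires_grad=True)
x_tgt, y_pseudo = torch.randn(5, 10), torch.randint(0, 3, (5,))
x_val, y_val = torch.randn(8, 10), torch.randint(0, 3, (8,))
weights = meta_reweight(W, b, x_tgt, y_pseudo, x_val, y_val)

The returned weights would then rescale the per-sample pseudo-label loss in the ordinary training step, so that target samples whose up-weighting reduces the validation loss on target-like source samples exert more influence on the classifier.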

Keywords: unsupervised domain adaptation; pseudo labels; target domain

Journal Title: IEEE Transactions on Knowledge and Data Engineering
Year Published: 2023
