Complementary Attention-Driven Contrastive Learning With Hard-Sample Exploring for Unsupervised Domain Adaptive Person Re-ID

Unsupervised domain adaptive (UDA) methods for person re-identification (Re-ID) aim to transfer knowledge from a labeled source domain to an unlabeled target domain without further annotations, which is challenging due to the drift of the label distribution and the absence of target-domain labels. Improving the clustering accuracy of pseudo-labels helps the model fit the target domain. However, errors from pseudo-label noise accumulate during training, which harms model performance. Moreover, hard samples can lead to a large gap between intra-class features and a small gap between inter-class features. To address these problems, this paper proposes a complementary attention-driven contrastive learning with hard-sample exploring (CACHE) algorithm. In CACHE, on one hand, a complementary attention module is used to improve the discriminability of the features; the resulting discriminative features reduce noisy pseudo-labels and improve the clustering accuracy of pseudo-labels. On the other hand, we explore hard samples based on instance and cluster relationships for contrastive learning, which makes the clusters more compact. Extensive experiments on three large-scale person Re-ID benchmarks demonstrate the effectiveness of the proposed method, which significantly outperforms state-of-the-art methods in terms of mAP and CMC.
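The hard-sample exploration described above can be sketched in a minimal form. The abstract does not give the exact loss, so the snippet below shows a common hard-mining contrastive pattern (hardest positive = least similar same-cluster sample, hardest negative = most similar other-cluster sample); the function name, temperature value, and loss shape are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def hard_sample_contrastive_loss(features, pseudo_labels, temperature=0.05):
    """Sketch of a hard-mining contrastive loss over pseudo-labeled
    clusters (assumed form, not the paper's exact formulation)."""
    # L2-normalize so inner products are cosine similarities
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T
    n = len(pseudo_labels)
    losses = []
    for i in range(n):
        same = pseudo_labels == pseudo_labels[i]
        same[i] = False                 # exclude the anchor itself
        diff = pseudo_labels != pseudo_labels[i]
        if not same.any() or not diff.any():
            continue
        hard_pos = sim[i][same].min()   # hardest positive: least similar
        hard_neg = sim[i][diff].max()   # hardest negative: most similar
        # softmax cross-entropy with the hard positive as the target
        logits = np.array([hard_pos, hard_neg]) / temperature
        logits -= logits.max()
        probs = np.exp(logits) / np.exp(logits).sum()
        losses.append(-np.log(probs[0]))
    return float(np.mean(losses))
```

With compact, well-separated clusters the hardest positive is still much closer than the hardest negative, so the loss is near zero; when a negative sits closer to the anchor than its farthest positive, the loss grows, which is the signal that pulls clusters tighter.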

Keywords: complementary attention; unsupervised domain; person; domain; contrastive learning

Journal Title: IEEE Transactions on Circuits and Systems for Video Technology
Year Published: 2023
