Although single-domain person re-identification (Re-ID) methods have achieved high accuracy, their dependence on labels from the same image domain severely limits their scalability. Cross-domain Re-ID has therefore received increasing attention. In this paper, a novel cross-domain Re-ID method combining supervised and unsupervised learning is proposed, which comprises two models: a triple-condition generative adversarial network (TC-GAN) and a dual-task feature extraction network (DFE-Net). We first use TC-GAN to generate labeled images in the target style, and then combine supervised and unsupervised learning to optimize DFE-Net. Specifically, we use the labeled generated data for supervised learning, and we mine effective information in the target data from two perspectives for unsupervised learning. To combine the two types of learning effectively, we design a dynamic weighting function that adjusts the weights of the two approaches during training. To verify the effectiveness of TC-GAN, DFE-Net, and the dynamic weighting function, we conduct multiple experiments on Market-1501 and DukeMTMC-reID. The experimental results show that the dynamic weighting function improves model performance and that our method outperforms many state-of-the-art methods.
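The abstract does not specify the form of the dynamic weighting function, so the following is only a minimal sketch of how a scheduled weight might blend the supervised loss (on TC-GAN-generated labeled images) with the unsupervised loss (mined from unlabeled target data); the cosine schedule and all names here are illustrative assumptions, not the authors' actual formulation.

```python
import math

def dynamic_weight(epoch, total_epochs):
    # Hypothetical schedule: start by emphasizing supervised learning on
    # generated labeled data, then gradually shift emphasis toward
    # unsupervised learning on the target domain. The paper's actual
    # weighting function is not given in the abstract.
    return 0.5 * (1.0 + math.cos(math.pi * epoch / total_epochs))

def combined_loss(sup_loss, unsup_loss, epoch, total_epochs):
    # Weighted sum of the supervised and unsupervised objectives,
    # with the balance controlled by the dynamic weight.
    w = dynamic_weight(epoch, total_epochs)
    return w * sup_loss + (1.0 - w) * unsup_loss
```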
               