Person re-identification (ReID), which aims to match individuals across non-overlapping cameras, has attracted much attention in computer vision due to its research significance and potential applications. Triplet loss-based CNN models have been very successful for person ReID; the triplet loss optimizes the feature embedding space so that the distances between samples of the same identity are much shorter than those between samples of different identities. Researchers have found that mining hard triplets is crucial to the success of the triplet loss. In this paper, motivated by the focal loss designed for classification models, we propose the triplet focal loss for person ReID. The triplet focal loss adaptively up-weights hard triplets and relatively down-weights easy ones by simply projecting the original Euclidean distances into an exponential kernel space. We conduct experiments on three of the largest benchmark datasets currently available for person ReID, namely Market-1501, DukeMTMC-ReID, and CUHK03, and the experimental results verify that the proposed triplet focal loss greatly outperforms the traditional triplet loss and achieves performance competitive with representative state-of-the-art methods.
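The core idea can be sketched in a few lines. This is a minimal, hypothetical NumPy illustration (not the authors' implementation): the Euclidean anchor-positive and anchor-negative distances are mapped through an exponential kernel `exp(d / sigma)` before the usual hinge, so hard triplets (large `d_ap`, small `d_an`) incur disproportionately larger loss and gradients than easy ones. The margin and the scale parameter `sigma` are illustrative choices, not values from the paper.

```python
import numpy as np

def triplet_loss(d_ap, d_an, margin=0.3):
    # Standard triplet hinge loss on raw Euclidean distances:
    # penalize when the positive pair is not closer than the
    # negative pair by at least `margin`.
    return np.maximum(0.0, d_ap - d_an + margin)

def triplet_focal_loss(d_ap, d_an, margin=0.3, sigma=1.0):
    # Hypothetical sketch of the triplet focal loss: project the
    # distances into an exponential kernel space exp(d / sigma)
    # before the hinge. Because exp grows faster for larger inputs,
    # hard triplets are up-weighted and easy triplets are relatively
    # down-weighted, without any explicit hard-triplet mining step.
    return np.maximum(0.0,
                      np.exp(d_ap / sigma) - np.exp(d_an / sigma) + margin)

# Easy triplet: positive pair close (0.2), negative pair far (1.5).
easy = triplet_focal_loss(0.2, 1.5)   # hinge clips this to 0

# Hard triplet: positive pair far (1.5), negative pair close (0.2).
hard_focal = triplet_focal_loss(1.5, 0.2)
hard_plain = triplet_loss(1.5, 0.2)
```

For the hard triplet above, the exponential mapping yields a larger loss than the plain triplet loss on the same distances, which is exactly the adaptive up-weighting the abstract describes.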
               