Abstract

Multi-scale feature fusion has proven effective in numerous person re-identification (ReID) works. However, existing multi-scale feature fusion operates on features of different semantic levels. We propose a novel multi-scale and multi-branch feature representation for person ReID, named Ms-Mb, which merges features of the same semantic level and integrates attention modules to learn robust and representative feature representations. Supervised by heterogeneous losses, the final feature representation of the image is more discriminative for person ReID. An extensive ablation study shows that the multi-scale feature fusion, the attention module and the heterogeneous-loss training strategy each contribute to the performance gains of Ms-Mb. We conducted experiments on four mainstream benchmarks: Market1501, DukeMTMC-reID, CUHK03 and MSMT17. Extensive experimental results show that Ms-Mb achieves state-of-the-art performance on Market1501 (Rank-1 = 95.8%, mAP = 89.9%), DukeMTMC-reID (Rank-1 = 90.8%, mAP = 82.2%) and MSMT17 (Rank-1 = 81.9%, mAP = 59.3%) without additional external data or re-ranking. On CUHK03 (Rank-1 = 75.4%, mAP = 72.9%), our approach is competitive with the best state-of-the-art method and surpasses the other methods by a large margin.
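To make the core idea concrete, the following is a minimal, hypothetical numpy sketch of fusing multi-scale branches drawn from the same semantic level: each branch pools the same feature map at a different scale, reduces it to a channel descriptor, applies a simple attention gate, and the branch descriptors are concatenated. The pooling scales, the softmax-style `channel_attention` gate, and all function names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def max_pool(feat, k):
    # Max-pool a (C, H, W) feature map over non-overlapping k x k windows
    # (assumes H and W are divisible by k).
    C, H, W = feat.shape
    return feat.reshape(C, H // k, k, W // k, k).max(axis=(2, 4))

def channel_attention(desc):
    # Hypothetical channel attention: gate the descriptor by a softmax
    # over its own channels (stand-in for the paper's attention module).
    w = np.exp(desc - desc.max())
    return desc * (w / w.sum())

def ms_mb_descriptor(feat, scales=(1, 2, 4)):
    # Multi-scale, multi-branch fusion at one semantic level:
    # every branch sees the SAME feature map, pooled at a different scale,
    # then globally averaged to a per-channel vector and attention-gated.
    branches = []
    for k in scales:
        v = max_pool(feat, k).mean(axis=(1, 2))  # (C,) branch descriptor
        branches.append(channel_attention(v))
    return np.concatenate(branches)              # (C * len(scales),)

feat = np.random.rand(8, 4, 4)   # toy (C=8, H=4, W=4) feature map
d = ms_mb_descriptor(feat)
print(d.shape)                   # (24,)
```

In a real model the concatenated descriptor would feed the ID classifier and be supervised jointly by heterogeneous losses (e.g. a classification loss and a metric loss), which the sketch omits.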