Occluded person re-identification is one of the most challenging tasks in security surveillance. Most existing methods focus on extracting human body features from occluded pedestrian images. This paper instead focuses on a key difference between occluded and non-occluded person re-ID: when computing the similarity between a holistic pedestrian image and an occluded pedestrian image, certain parts of the human body in the holistic image can distract pedestrian retrieval. To solve this problem, we propose an occluded person re-ID framework named the attribute-based shift attention network (ASAN). First, unlike other methods that use off-the-shelf tools to locate pedestrian body parts in occluded images, we design an attribute-guided occlusion-sensitive pedestrian segmentation (AOPS) module. AOPS is a weakly supervised method that leverages the semantic-level attribute annotations in person re-ID datasets. Second, guided by the pedestrian masks provided by AOPS, a shift feature adaption (SFA) module extracts features of the visible human body regions in a part-based manner. Finally, a visible region matching (VRM) algorithm is proposed to filter out interfering information from the holistic person images during the retrieval phase and further purify the pedestrian feature representations. Extensive experiments with ablation analysis demonstrate our method’s effectiveness, and state-of-the-art results are achieved on four occluded datasets: Partial-REID, Partial-iLIDS, Occluded-DukeMTMC, and Occluded-REID. Moreover, experiments on two holistic person re-ID datasets, Market-1501 and DukeMTMC-reID, and a vehicle re-ID dataset, VeRi-776, show that ASAN also generalizes well.
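To make the retrieval-phase idea concrete, below is a minimal sketch of visible-region matching, not the paper's actual VRM algorithm. It assumes each image is represented by P part features plus a per-part binary visibility indicator (the function name vrm_score, the shapes, and the cosine-similarity aggregation are all illustrative assumptions); only parts visible in both images contribute to the score, so distracting body parts of the holistic image that the occluded query cannot exhibit are excluded.

```python
# Hypothetical visible-region matching sketch (illustrative, not the
# authors' implementation). Each image: (P, D) part features and a (P,)
# binary visibility vector; similarity is averaged over parts visible
# in BOTH images, filtering out distracting parts of the holistic image.
import torch
import torch.nn.functional as F

def vrm_score(q_parts, q_vis, g_parts, g_vis, eps=1e-8):
    """Similarity between query and gallery over shared visible parts.

    q_parts, g_parts: (P, D) part features.
    q_vis, g_vis:     (P,) visibility indicators in {0, 1}.
    """
    q = F.normalize(q_parts, dim=-1)      # cosine-normalize each part feature
    g = F.normalize(g_parts, dim=-1)
    part_sim = (q * g).sum(dim=-1)        # (P,) per-part cosine similarity
    shared = q_vis * g_vis                # 1 only where both parts are visible
    return (part_sim * shared).sum() / (shared.sum() + eps)

# Usage with random features: the query's last two parts are occluded,
# so they are ignored even though the holistic gallery image shows them.
P, D = 5, 256
q_parts, g_parts = torch.randn(P, D), torch.randn(P, D)
q_vis = torch.tensor([1., 1., 1., 0., 0.])
g_vis = torch.ones(P)
print(vrm_score(q_parts, q_vis, g_parts, g_vis))
```

Under these assumptions, the key design choice is that visibility acts as a hard gate on the part-level similarities, so a holistic gallery image is compared only on the regions the occluded query can actually provide.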