Video-based person re-identification attracts wide attention because it plays a crucial role in many video-surveillance applications. The task is to match image sequences of a pedestrian recorded by non-overlapping cameras. As in many visual recognition problems, variations in pose, viewpoint, illumination, and occlusion make this task non-trivial. To increase the robustness of features to these variations and to occlusion, this paper designs an aligned multi-part image model inspired by the human visual attention mechanism. The model first applies pose estimation to align the pedestrians, then divides the aligned images into parts to extract multi-part appearance features. In addition, we present an independent metric-learning scheme that combines the multi-part appearance features with spatial-temporal features: each feature is fed into distance metric learning separately to obtain a metric kernel, and the resulting kernels are fused with weights learned by the attention measure. This fusion strategy achieves better functional complementarity among the features. In experiments, we analyze the effectiveness of the major components. Extensive experiments on two public benchmark datasets, iLIDS-VID and PRID-2011, demonstrate the effectiveness of the proposed method.
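The fusion step described above can be illustrated with a minimal sketch. The abstract does not specify the metric-learning algorithm or the attention measure, so the function name, inputs, and the simple weight normalization below are all assumptions; the sketch only shows the final stage, where per-feature distance matrices (one per metric kernel) are combined with learned weights.

```python
import numpy as np

def fuse_metric_kernels(distance_matrices, weights):
    """Fuse per-feature distance matrices with learned weights (illustrative sketch).

    distance_matrices: list of (n_probe, n_gallery) arrays, one per feature
        type (e.g. a multi-part appearance kernel and a spatial-temporal
        kernel), each produced by a separate distance metric learner.
    weights: non-negative fusion weights; in the paper these are learned by
        an attention measure, here they are simply normalized to sum to 1.
    Returns the fused (n_probe, n_gallery) distance matrix used for ranking.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # assumption: convex combination of the kernels
    fused = np.zeros_like(distance_matrices[0], dtype=float)
    for wi, d in zip(w, distance_matrices):
        fused += wi * d
    return fused
```

Ranking a probe sequence then amounts to sorting gallery entries by the fused distance, e.g. `np.argsort(fused[i])` for probe `i`.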