Abstract The extensive deployment of surveillance cameras in public places, such as subway stations and shopping malls, necessitates automated visual-data processing approaches to match pedestrians across multiple non-overlapping cameras. However, due to the insufficient number of labeled training samples in real surveillance scenes, it is difficult to train an effective deep neural network for cross-camera pedestrian recognition. Moreover, cross-camera variations in viewpoint, illumination, and background make the task even more challenging. To address these issues, in this paper we propose to transfer the parameters of a pre-trained network to our target network and then update those parameters adaptively using training samples from the target domain. More importantly, we develop new network structures specially tailored to the cross-camera pedestrian recognition task, and implement a simple yet effective multi-level feature fusion method that yields more discriminative and robust features for pedestrian recognition. Specifically, rather than conventionally performing classification on the single-level feature of the last feature layer, we utilize multi-level features by associating feature visualization with multi-level feature fusion. As another contribution, we have published our code and extracted features to facilitate further research. Extensive experiments conducted on the WARD, PRID, and MARS datasets show that the proposed method consistently outperforms state-of-the-art methods.
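The abstract gives no implementation details, so the following is only a minimal sketch of the transfer-plus-fusion idea it describes, assuming a PyTorch ResNet-50 backbone pre-trained on ImageNet. The class name MultiLevelFusionNet, the choice of which stages to fuse, the number of identities, and the per-group learning rates are all illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class MultiLevelFusionNet(nn.Module):
    """Sketch: transfer a pre-trained backbone and fuse multi-level features."""
    def __init__(self, num_identities: int):
        super().__init__()
        # Transfer parameters from an ImageNet pre-trained network.
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,
                                  backbone.relu, backbone.maxpool)
        self.layer1, self.layer2 = backbone.layer1, backbone.layer2
        self.layer3, self.layer4 = backbone.layer3, backbone.layer4
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Fused descriptor: channels of ResNet stages 2-4 concatenated
        # (512 + 1024 + 2048); the stage choice is an assumption.
        self.classifier = nn.Linear(512 + 1024 + 2048, num_identities)

    def forward(self, x):
        f1 = self.layer1(self.stem(x))
        f2 = self.layer2(f1)
        f3 = self.layer3(f2)
        f4 = self.layer4(f3)
        # Multi-level fusion: globally pool several stages and concatenate,
        # rather than classifying on the last feature layer alone.
        fused = torch.cat([self.pool(f).flatten(1) for f in (f2, f3, f4)], dim=1)
        return self.classifier(fused), fused  # logits and re-ID descriptor

# One plausible form of "adaptive updating": fine-tune on the target domain
# with a smaller learning rate for transferred layers than for the new head.
model = MultiLevelFusionNet(num_identities=150)  # identity count is hypothetical
optimizer = torch.optim.SGD([
    {"params": model.layer4.parameters(), "lr": 1e-3},
    {"params": model.classifier.parameters(), "lr": 1e-2},
], momentum=0.9)
```

The two-group optimizer reflects a common transfer-learning heuristic; the paper's own adaptive update rule may differ.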