Person Re-Identification by Contour Sketch Under Moderate Clothing Change

Person re-identification (re-id), the process of matching pedestrian images across different camera views, is an important task in visual surveillance. Substantial development of re-id has recently been observed, and the majority of existing models are largely dependent on color appearance and assume that pedestrians do not change their clothes across camera views. This limitation, however, becomes an issue when tracking a person at different places and times if that person (e.g., a criminal suspect) changes his or her clothes: most existing methods then fail, since they rely heavily on color appearance and are thus inclined to match a person to another person wearing similar clothes. In this work, we call person re-id under clothing change “cross-clothes person re-id.” In particular, as a first attempt at solving this problem based on visible light images, we consider the case in which a person changes clothes only moderately; that is, we assume that a person wears clothes of similar thickness, so that the shape of the body does not change significantly when the weather does not vary substantially within a short period of time. We perform cross-clothes person re-id based on a contour sketch of the person image, taking advantage of the shape of the human body instead of color information to extract features that are robust to moderate clothing change. To select/sample more reliable and discriminative curve patterns on a body contour sketch, we introduce a learning-based spatial polar transformation (SPT) layer into the deep neural network to transform contour sketch images so that reliable and discriminative convolutional neural network (CNN) features can be extracted in a polar coordinate space. An angle-specific extractor (ASE) is applied in the following layers to extract more fine-grained, angle-specific discriminative features. By varying the sampling range of the SPT, we develop a multistream network that aggregates multi-granularity features to better identify a person. Owing to the lack of a large-scale dataset for cross-clothes person re-id, we also contribute a new dataset consisting of 33,698 images from 221 identities. Our experiments illustrate the challenges of cross-clothes person re-id and demonstrate the effectiveness of the proposed method.
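
The SPT layer described in the abstract learns its sampling parameters end to end; as a rough, non-authoritative illustration of the underlying idea only, the sketch below resamples a contour-sketch image into a polar (angle x radius) grid with a fixed center and fixed angle/radius ranges. The center, ranges, and resolutions are assumptions chosen for the example, and the PyTorch code is not the authors' implementation.

    # Minimal sketch of a *fixed* polar resampling of a contour-sketch image.
    # The paper's SPT layer learns its sampling; here everything is hard-coded
    # (center at the image center, full angle range, unit radius) purely to
    # show the coordinate transform that makes angle-specific features possible.
    import math
    import torch
    import torch.nn.functional as F

    def polar_transform(sketch, num_angles=64, num_radii=64):
        """Resample a (N, C, H, W) contour-sketch tensor into polar coordinates.

        Output shape is (N, C, num_angles, num_radii): each output row is one
        ray cast from the image center, so a downstream CNN can treat rows as
        angle-specific streams.
        """
        n = sketch.size(0)
        theta = torch.linspace(0.0, 2.0 * math.pi, num_angles).view(-1, 1)  # (A, 1)
        radius = torch.linspace(0.0, 1.0, num_radii).view(1, -1)            # (1, R)

        # Convert (radius, angle) pairs to normalized x/y in [-1, 1], the
        # coordinate convention expected by grid_sample.
        grid_x = radius * torch.cos(theta)                                   # (A, R)
        grid_y = radius * torch.sin(theta)                                   # (A, R)
        grid = torch.stack([grid_x, grid_y], dim=-1)                         # (A, R, 2)
        grid = grid.unsqueeze(0).expand(n, -1, -1, -1)                       # (N, A, R, 2)

        return F.grid_sample(sketch, grid, mode="bilinear", align_corners=True)

    # Usage: a batch of two single-channel contour sketches
    sketch = torch.rand(2, 1, 128, 128)
    polar = polar_transform(sketch)
    print(polar.shape)  # torch.Size([2, 1, 64, 64])

In the paper's multistream setup, varying the sampled angle/radius range (here fixed) yields features at different granularities, which are then aggregated.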

Keywords: person; clothing change; contour sketch

Journal Title: IEEE Transactions on Pattern Analysis and Machine Intelligence
Year Published: 2021

