
Global and Part Feature Fusion for Cross-Modality Person Re-Identification


Visible-infrared person re-identification (VI Re-ID) is a challenging but practical task that aims at matching pedestrian images between the visible (daytime) modality and the infrared (nighttime) modality, and it plays an important role in criminal investigation and intelligent video surveillance applications. Numerous previous studies focused on alleviating the modality discrepancy and obtaining discriminative features by devising complex networks for VI Re-ID, but a cumbersome network structure is not suitable for practical industrial applications. In this paper, we propose a novel fusion method of global and part features to extract distinguishing features and alleviate cross-modality differences, named the Global and Part Feature Fusion network (GPFF), which has not been well studied in the current literature. Specifically, we first adopt a dual-stream ResNet50 as the backbone network to alleviate the modality discrepancy. Then, we explore how to fuse global and local features to obtain discriminative features. Finally, we apply a heterogeneous center triplet loss (hetero-center triplet loss) instead of the traditional triplet loss to guide sample center learning. Our proposed approach is simple but effective and can remarkably boost the performance of VI Re-ID. Experimental results on two public datasets (SYSU-MM01 and RegDB) demonstrate that our approach is superior to state-of-the-art methods. Through experiments, we find that the effective fusion of global and local features plays an important role in extracting discriminative features.
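The abstract describes the pipeline only at a high level, so the following is a minimal PyTorch sketch of how a dual-stream ResNet50 backbone with global and part (stripe-based) feature fusion and a hetero-center triplet loss could be wired up. All names and details here (GPFFNet, num_parts, the stripe pooling, hetero_center_triplet_loss) are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of the pipeline described in the abstract.
import torch
import torch.nn as nn
import torchvision.models as models


class GPFFNet(nn.Module):
    """Dual-stream ResNet50 with global + part feature fusion (sketch)."""

    def __init__(self, num_parts=6):
        super().__init__()
        vis = models.resnet50(weights=None)
        inf = models.resnet50(weights=None)
        # Modality-specific shallow layers (the "dual stream" part).
        self.vis_stem = nn.Sequential(vis.conv1, vis.bn1, vis.relu, vis.maxpool, vis.layer1)
        self.inf_stem = nn.Sequential(inf.conv1, inf.bn1, inf.relu, inf.maxpool, inf.layer1)
        # Deeper layers shared across modalities.
        self.shared = nn.Sequential(vis.layer2, vis.layer3, vis.layer4)
        self.global_pool = nn.AdaptiveAvgPool2d(1)
        # Horizontal stripes as local "part" features (an assumption).
        self.part_pool = nn.AdaptiveAvgPool2d((num_parts, 1))

    def forward(self, x, modality):
        stem = self.vis_stem if modality == "visible" else self.inf_stem
        f = self.shared(stem(x))                  # (B, 2048, H, W)
        g = self.global_pool(f).flatten(1)        # global feature, (B, 2048)
        p = self.part_pool(f).squeeze(-1)         # part features, (B, 2048, P)
        p = p.permute(0, 2, 1).flatten(1)         # (B, P * 2048)
        return torch.cat([g, p], dim=1)           # fused global + part feature


def hetero_center_triplet_loss(feats, labels, modalities, margin=0.3):
    """Triplet loss computed on per-identity, per-modality feature centers (sketch)."""
    centers, center_labels = [], []
    for pid in labels.unique():
        for m in (0, 1):                          # 0 = visible, 1 = infrared
            mask = (labels == pid) & (modalities == m)
            if mask.any():
                centers.append(feats[mask].mean(dim=0))
                center_labels.append(pid)
    centers = torch.stack(centers)
    center_labels = torch.stack(center_labels)
    dist = torch.cdist(centers.unsqueeze(0), centers.unsqueeze(0)).squeeze(0)
    n = len(centers)
    idx = torch.arange(n, device=feats.device)
    loss = feats.new_zeros(())
    for i in range(n):
        pos = (center_labels == center_labels[i]) & (idx != i)
        neg = center_labels != center_labels[i]
        if pos.any() and neg.any():
            # Pull same-identity centers across modalities together,
            # push different-identity centers apart.
            loss = loss + torch.relu(dist[i][pos].max() - dist[i][neg].min() + margin)
    return loss / n


# Example usage (shapes only):
# model = GPFFNet()
# vis_feat = model(torch.randn(8, 3, 288, 144), modality="visible")
```

Operating on per-identity centers rather than individual samples is what distinguishes the hetero-center triplet loss from the traditional triplet loss mentioned in the abstract; the exact center construction and mining strategy used in the paper may differ from this sketch.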

Keywords: global part; fusion; cross modality; modality; person identification

Journal Title: IEEE Access
Year Published: 2022



