RGB-Infrared cross-modal person re-identification (Re-ID) has drawn increasing attention due to its practical value. Most current works rely on supervised training. In real-world applications, however, manually collecting pair-wise RGB-Infrared (IR) person data is labor-intensive and time-consuming. Moreover, when a trained model is applied directly to another domain, performance usually drops significantly. To overcome these problems, we make the first attempt to transfer a learned model to a new, unlabeled RGB-IR domain. This practical problem poses two kinds of challenges, i.e., cross-modal (RGB-Infrared) and cross-domain (different datasets) person Re-ID; previous works have typically addressed only one of the two. In this work, we propose a dual alignment network (DAN) to solve the RGB-Infrared cross-modal cross-domain person Re-ID problem. The network consists of three parts: a Domain Adversarial Alignment component (DAA), a Pseudo Label Generation module for the target domain (PLG), and a Cross-Modal Alignment component (CMA). These three modules complement one another and jointly drive the model to learn domain-invariant and modality-invariant person representations. Furthermore, we propose an evaluation protocol for cross-modal cross-domain person Re-ID that synthesizes target domains by adding random noise, adjusting the lighting intensity, and changing the background color, respectively. Experiments on real and synthetic datasets under this cross-modal cross-domain setting demonstrate the effectiveness of our method.
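
For intuition, below is a minimal PyTorch sketch of how the three components could fit together. Everything here is an assumption drawn from the abstract alone: the single-layer backbone, the gradient-reversal discriminator standing in for DAA, k-means pseudo labels standing in for PLG, and the moment-matching loss standing in for CMA are illustrative choices, not the paper's actual design, and the names DANSketch, generate_pseudo_labels, and cross_modal_alignment_loss are hypothetical.

```python
import torch
import torch.nn as nn
from torch.autograd import Function
from sklearn.cluster import KMeans


class GradReverse(Function):
    """Gradient reversal layer: identity forward, negated gradient backward.
    A standard building block for adversarial domain alignment (DAA)."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class DANSketch(nn.Module):
    """Toy stand-in for the dual alignment network described in the abstract."""

    def __init__(self, feat_dim=2048, num_ids=500):
        super().__init__()
        # Shared feature extractor; the paper's actual backbone is not given
        # in the abstract, so a single linear layer stands in for it here
        # (inputs assumed to be 3x128x64 person crops).
        self.backbone = nn.Sequential(nn.Linear(3 * 128 * 64, feat_dim), nn.ReLU())
        self.id_classifier = nn.Linear(feat_dim, num_ids)
        # DAA head: source-vs-target domain discriminator trained adversarially
        # through the gradient reversal layer.
        self.domain_classifier = nn.Linear(feat_dim, 2)

    def forward(self, x, grl_lambda=1.0):
        f = self.backbone(x.flatten(1))
        id_logits = self.id_classifier(f)
        dom_logits = self.domain_classifier(GradReverse.apply(f, grl_lambda))
        return f, id_logits, dom_logits


def generate_pseudo_labels(target_feats, num_clusters=100):
    """PLG stand-in: cluster unlabeled target-domain features and use the
    cluster indices as pseudo identities (the clustering algorithm is not
    named in the abstract; k-means is one common choice)."""
    km = KMeans(n_clusters=num_clusters, n_init=10)
    return km.fit_predict(target_feats.detach().cpu().numpy())


def cross_modal_alignment_loss(f_rgb, f_ir):
    """CMA stand-in: pull RGB and IR feature statistics together. The exact
    alignment loss is not given in the abstract; first-moment matching is
    one plausible instantiation."""
    return (f_rgb.mean(dim=0) - f_ir.mean(dim=0)).pow(2).sum()
```

In this reading, the identity loss on labeled source data and on clustered target pseudo labels would be combined with the adversarial domain loss and the cross-modal term; how the paper actually weights and schedules these objectives is not stated in the abstract.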
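The target-domain synthesis protocol could likewise be sketched as three independent image perturbations, as below. The Gaussian noise model, the brightness factor, and the availability of a foreground mask for recoloring the background are all assumptions; the abstract only names the three operations.

```python
import numpy as np
from PIL import Image, ImageEnhance


def add_random_noise(img, sigma=10.0):
    """Perturb the image with zero-mean Gaussian noise (distribution assumed;
    the abstract only says 'random noise')."""
    arr = np.asarray(img).astype(np.float32)
    arr += np.random.normal(0.0, sigma, arr.shape)
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))


def adjust_lighting(img, factor=0.6):
    """Scale the overall lighting intensity; factor < 1 darkens, > 1 brightens."""
    return ImageEnhance.Brightness(img).enhance(factor)


def change_background_color(img, fg_mask, shift=(0, 60, 0)):
    """Shift the color of background pixels. A boolean foreground mask is
    assumed to be available; the abstract does not say how the background
    is segmented."""
    arr = np.asarray(img).astype(np.int16)
    arr[~fg_mask] += np.array(shift, dtype=np.int16)
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
```

Applying each transform separately to a source dataset would yield three synthetic target domains, matching the abstract's "respectively" phrasing.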