VSLN: View‐aware sphere learning network for cross‐view vehicle re‐identification

Cross‐view vehicle re‐identification (ReID) has attracted widespread attention as an increasingly important vision task in intelligent transportation and urban surveillance. Benefiting from Convolutional Neural Networks (CNNs), recent studies have advanced vehicle ReID by extracting discriminative local features. However, two fundamental challenges still hinder cross‐view vehicle ReID: small interclass discrepancy caused by similar appearance and large intraclass distance caused by different views. In this paper, a novel View‐aware Sphere Learning Network (VSLN) is proposed to alleviate these issues while retaining the merits of CNN‐based approaches by generating view‐aware sphere‐based features. First, a Sphere Feature Embedding Network (SFEN) is proposed to constrain image features onto a hypersphere and extract sphere features, and a sphere similarity triplet loss is presented to help SFEN concentrate on robust and discriminative vehicle parts. Second, since vehicle images are usually captured from different viewpoints, SFEN is further extended with a Vehicle Viewpoint Predictor (VVP) combined with a global attention mechanism to enlarge the interclass discrepancy and shorten the intraclass distance. Moreover, a city‐scale data set named Vehicle from Different Viewpoints, containing image‐level viewpoint labels, is collected for training VVP. The proposed VSLN achieves 96.31% and 79.46% Top‐1 accuracy on the VeRi‐776 and VRIC data sets, respectively, and extensive experiments on both benchmarks show that it outperforms state‐of‐the‐art methods.
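
For readers curious how the two ideas named in the abstract typically look in code, below is a minimal PyTorch sketch of (1) constraining embeddings onto a unit hypersphere via L2 normalization and (2) a triplet loss computed from sphere (cosine) similarity. This is an illustration under stated assumptions, not the paper's implementation: the ResNet‐50 backbone, the names SphereEmbedder and sphere_triplet_loss, and the 0.3 margin are hypothetical choices, and the actual SFEN architecture, loss formulation, and VVP/attention branch are specified only in the full paper.

```python
# Hypothetical sketch of hypersphere feature embedding + a cosine-similarity
# triplet loss, in the spirit of the abstract. Not the paper's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

class SphereEmbedder(nn.Module):
    """CNN backbone whose output features are projected onto a unit hypersphere."""
    def __init__(self, dim: int = 512):
        super().__init__()
        backbone = models.resnet50(weights=None)  # assumed backbone choice
        backbone.fc = nn.Linear(backbone.fc.in_features, dim)
        self.backbone = backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.backbone(x)
        # L2 normalization constrains every embedding to the unit hypersphere,
        # so comparisons between vehicles reduce to angular/cosine similarity.
        return F.normalize(feat, p=2, dim=1)

def sphere_triplet_loss(anchor, positive, negative, margin: float = 0.3):
    """Triplet loss on cosine similarity: pull same-identity embeddings
    together on the sphere, push different identities apart by a margin."""
    sim_pos = (anchor * positive).sum(dim=1)  # cosine similarity (unit vectors)
    sim_neg = (anchor * negative).sum(dim=1)
    return F.relu(sim_neg - sim_pos + margin).mean()

# Usage sketch: embed an anchor/positive/negative image batch and take a step.
model = SphereEmbedder()
imgs = torch.randn(3, 4, 3, 224, 224)  # (triplet role, batch, C, H, W)
a, p, n = (model(imgs[i]) for i in range(3))
loss = sphere_triplet_loss(a, p, n)
loss.backward()
```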

Keywords: vehicle re‐identification; cross‐view; view‐aware; sphere learning network

Journal Title: International Journal of Intelligent Systems
Year Published: 2022
