
Multi-view metric learning based on KL-divergence for similarity measurement

In the past decades, we have witnessed a surge of interest in learning distance metrics for various image processing tasks. However, when faced with features from multiple views, most metric learning methods fail to integrate the compatible and complementary information in multi-view features to train a common distance metric. Such single-view methods discard most of this information, which severely degrades their performance. How to fully exploit information from multiple views to construct an optimal distance metric is therefore of vital importance, but challenging. To address this issue, this paper constructs a multi-view metric learning (MML) method that uses KL-divergence to integrate features from multiple views. Minimizing the KL-divergence between features from different views enforces consistency across views, which enables MML to exploit information from all of them. Experiments on several benchmark multi-view datasets verify the strong performance of this method.
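The abstract does not spell out the paper's objective function, but the core idea admits a minimal sketch. Assuming, purely as an illustration and not as the paper's actual formulation, that every view's features are projected by a shared Mahalanobis metric M = LLᵀ and modeled as Gaussians, the cross-view consistency term can be taken as the sum of pairwise KL-divergences between those Gaussians. The names gaussian_kl and view_consistency_penalty below are hypothetical:

```python
import numpy as np

def gaussian_kl(mu_p, cov_p, mu_q, cov_q):
    """Closed-form KL( N(mu_p, cov_p) || N(mu_q, cov_q) )."""
    d = mu_p.shape[0]
    cov_q_inv = np.linalg.inv(cov_q)
    diff = mu_q - mu_p
    _, logdet_p = np.linalg.slogdet(cov_p)
    _, logdet_q = np.linalg.slogdet(cov_q)
    return 0.5 * (np.trace(cov_q_inv @ cov_p)
                  + diff @ cov_q_inv @ diff
                  - d
                  + logdet_q - logdet_p)

def view_consistency_penalty(views, M):
    """Sum of pairwise KL-divergences between Gaussians fitted to each
    view's features after projection by the metric factor L (M = L L^T)."""
    L = np.linalg.cholesky(M)          # requires M positive definite
    stats = []
    for X in views:                    # X: (n_samples, n_features)
        Z = X @ L                      # rows become L^T x, so z^T z = x^T M x
        stats.append((Z.mean(axis=0), np.cov(Z, rowvar=False)))
    return sum(gaussian_kl(*stats[i], *stats[j])
               for i in range(len(stats))
               for j in range(len(stats)) if i != j)

# Toy usage: two random 5-dimensional "views" under the identity metric.
rng = np.random.default_rng(0)
views = [rng.normal(size=(200, 5)), rng.normal(size=(200, 5))]
print(view_consistency_penalty(views, np.eye(5)))
```

In a complete method this penalty would be minimized jointly with a discriminative metric learning loss over M; the sketch only illustrates how a KL-divergence term couples the views through a common metric.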

Keywords: information; multiple views; metric learning; multi-view

Journal Title: Neurocomputing
Year Published: 2017
