
Views Meet Labels: Personalized Relation Refinement Network for Multiview Multilabel Learning

Multiview multilabel (MVML) learning deals with objects that have diverse feature vectors and rich semantics. Existing methods build a shared latent space among multiple views. However, they do not adequately capture semantic consistency and view interactions. Moreover, they neglect the different contributions each view makes to multilabel learning. To address these issues, a novel Personalized RElation rEfinement Network (PREEN) is proposed to fully exploit the associated relationships. First, common and specific information is learned adversarially to prevent the two from interfering with each other. Then, we adapt the standard transformer to capture cross-view interactions. Similarly, we design label-specific transformers to model label-view dependence, associating each label with its relevant views separately. Finally, we develop an interlabel attention mechanism to exploit label correlations and dynamically refine complementary information from other labels. Extensive experiments on one real-world and seven public MVML datasets validate the effectiveness of the proposed PREEN.
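The abstract does not spell out the architecture, but the cross-view interaction step it describes resembles standard scaled dot-product self-attention applied over per-view embeddings. The sketch below is an illustrative assumption, not the paper's actual implementation; all names and dimensions are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_view_attention(views):
    """Refine each view's embedding with context from the other views.

    views: (n_views, d) array, one embedding per view (hypothetical layout).
    Returns an array of the same shape: each row is an attention-weighted
    mixture of all view embeddings.
    """
    d = views.shape[1]
    scores = views @ views.T / np.sqrt(d)   # (n_views, n_views) similarity
    weights = softmax(scores, axis=-1)      # rows sum to 1
    return weights @ views                  # cross-view-refined embeddings

# Toy example: 3 views with 4-dimensional embeddings.
rng = np.random.default_rng(0)
V = rng.normal(size=(3, 4))
refined = cross_view_attention(V)
```

In PREEN this kind of interaction would sit on top of the adversarially separated common/specific representations, with additional label-specific transformers and interlabel attention; those components are not reconstructible from the abstract alone.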

Keywords: multiview multilabel; personalized relation; refinement network; relation refinement; multilabel learning

Journal Title: IEEE MultiMedia
Year Published: 2022


