Multiview multilabel (MVML) learning deals with objects described by diverse feature vectors and rich semantics. Existing methods build a shared latent space across multiple views; however, they do not fully capture semantic consistency and cross-view interactions, and they neglect the different contributions that each view makes to multilabel learning. To address these issues, we propose a novel Personalized RElation rEfinement Network (PREEN) that fully exploits the relationships among views and labels. First, common and view-specific information is learned adversarially to prevent the two from interfering with each other. Then, we adapt the standard transformer to capture cross-view interactions. In addition, we design label-specific transformers to model label-view dependence, associating each label with its relevant views separately. Finally, we develop an interlabel attention mechanism that exploits label correlations and dynamically refines complementary information from the other labels. Extensive experiments on one real-world and seven public MVML datasets validate the effectiveness of the proposed PREEN.
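The abstract gives no implementation details, so the following is only a minimal sketch, assuming a PyTorch-style design, of the three attention stages it describes: cross-view interaction, label-specific attention over views, and interlabel attention. All module names, dimensions, and the simple linear view encoders are assumptions for illustration; the adversarial separation of common and view-specific information is omitted, and this is not the authors' actual PREEN code.

```python
# Hypothetical sketch of the attention stages described in the abstract.
# Everything here (names, dimensions, encoders) is assumed for illustration.
import torch
import torch.nn as nn


class MVMLSketch(nn.Module):
    def __init__(self, view_dims, d_model=128, num_labels=10, num_heads=4):
        super().__init__()
        # Per-view encoders projecting each view into a shared d_model space.
        self.encoders = nn.ModuleList(nn.Linear(d, d_model) for d in view_dims)
        # Cross-view interaction: a standard transformer encoder applied to
        # the sequence of view embeddings.
        layer = nn.TransformerEncoderLayer(d_model, num_heads, batch_first=True)
        self.cross_view = nn.TransformerEncoder(layer, num_layers=1)
        # Label-specific attention: each label query attends over the views,
        # so every label can weight the views differently.
        self.label_queries = nn.Parameter(torch.randn(num_labels, d_model))
        self.label_view_attn = nn.MultiheadAttention(d_model, num_heads,
                                                     batch_first=True)
        # Interlabel attention: labels exchange complementary information.
        self.inter_label_attn = nn.MultiheadAttention(d_model, num_heads,
                                                      batch_first=True)
        self.classifier = nn.Linear(d_model, 1)

    def forward(self, views):
        # views: list of tensors, one per view, each of shape (batch, dim_v).
        tokens = torch.stack([enc(v) for enc, v in zip(self.encoders, views)],
                             dim=1)                       # (batch, n_views, d)
        tokens = self.cross_view(tokens)                  # cross-view interaction
        batch = tokens.size(0)
        queries = self.label_queries.unsqueeze(0).expand(batch, -1, -1)
        label_repr, _ = self.label_view_attn(queries, tokens, tokens)
        label_repr, _ = self.inter_label_attn(label_repr, label_repr, label_repr)
        return self.classifier(label_repr).squeeze(-1)    # (batch, num_labels)


# Usage with two synthetic views of 64 and 32 features.
model = MVMLSketch(view_dims=[64, 32], d_model=128, num_labels=5)
x = [torch.randn(8, 64), torch.randn(8, 32)]
logits = model(x)  # shape: (8, 5), one logit per label
```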