Multilabel image annotation has attracted considerable research interest owing to its practical value in multimedia and computer vision, yet the large amount of labeled training data required to achieve promising performance makes it a challenging task. Fortunately, unlabeled and relevant data are widely available and can be exploited to support the annotation task. To this end, we propose a novel adaptive hypergraph learning (AHL) method for multilabel image annotation in a semisupervised setting, in which both the limited labeled data and abundant unlabeled data are utilized to improve annotation performance. Specifically, we derive a multilabel propagation scheme by learning a hypergraph that preserves the local geometric structure of the data in a high-order manner. Meanwhile, a feature projection is integrated into AHL to obtain a latent feature space in which unlabeled instances can be effectively and robustly assigned multiple labels. Experiments on six widely used image datasets are conducted to evaluate our model, and the results demonstrate that the proposed AHL outperforms other state-of-the-art semisupervised methods.
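To make the general idea concrete, the sketch below shows plain hypergraph-based semisupervised label propagation in the style of Zhou et al.'s hypergraph Laplacian framework: each vertex spawns a hyperedge containing itself and its k nearest neighbours, and label scores are diffused from labeled to unlabeled instances. This is only an illustrative baseline, not the authors' AHL method; AHL additionally learns adaptive hyperedge weights and a feature projection, which are not reproduced here. The function names, the k-NN hyperedge construction, and all parameter values are assumptions for illustration.

```python
import numpy as np

def knn_hypergraph_incidence(X, k=5):
    """Build a hypergraph where each vertex spawns one hyperedge
    containing itself and its k nearest neighbours (Euclidean).
    Returns the (n_vertices, n_hyperedges) incidence matrix H."""
    n = X.shape[0]
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # pairwise squared distances
    H = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[: k + 1]            # includes vertex i itself
        H[nbrs, i] = 1.0
    return H

def hypergraph_label_propagation(X, Y, labeled_mask, k=5, alpha=0.99, iters=100):
    """Generic semisupervised multilabel propagation on a k-NN hypergraph.
    X: (n, d) features; Y: (n, c) binary label matrix (unlabeled rows may be
    all zeros); labeled_mask: boolean (n,). Returns soft label scores (n, c)."""
    H = knn_hypergraph_incidence(X, k)
    w = np.ones(H.shape[1])                          # uniform hyperedge weights (AHL learns these adaptively)
    Dv = H @ w                                       # vertex degrees
    De = H.sum(axis=0)                               # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(Dv))
    # Normalized hypergraph adjacency: Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}
    Theta = Dv_inv_sqrt @ H @ np.diag(w / De) @ H.T @ Dv_inv_sqrt
    Y0 = np.where(labeled_mask[:, None], Y, 0.0)     # clamp labeled rows, zero elsewhere
    F = Y0.copy()
    for _ in range(iters):                           # iterative diffusion of label scores
        F = alpha * Theta @ F + (1.0 - alpha) * Y0
    return F
```

Thresholding or ranking the returned scores per instance yields the final multilabel assignment; the uniform hyperedge weights here are precisely what an adaptive scheme such as AHL would replace with learned weights.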