Abstract Nonnegative matrix factorization (NMF) has attracted increasing attention due to its wide applications in computer vision, information retrieval, and machine learning. In contrast to the original NMF and its variants, this paper proposes a novel unsupervised learning framework, called robust structured nonnegative matrix factorization (RSNMF), which respects both the global and local structures of the data space. Specifically, to learn a discriminative representation, RSNMF explores the global structure by considering the data variance and the local structure by exploiting the data neighborhood. To address noise and outliers, it imposes joint L2,1-norm minimization on both the loss function of NMF and the regularization of the basis matrix. The geometric structure constraints and the joint L2,1-norm are combined into a single optimization model, which is solved by the proposed iterative algorithm. Finally, the convergence of RSNMF is analyzed both theoretically and empirically. Experimental results on real-world data sets demonstrate the effectiveness of the proposed algorithm in comparison with state-of-the-art algorithms.
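To make the formulation concrete, a minimal sketch of such an objective is given below. This is an illustrative form only, not the exact RSNMF model: the trade-off parameters lambda and alpha, the graph Laplacian L built from the data neighborhood, and the omission of the global (variance-based) term are assumptions not spelled out in the abstract. Here X denotes the nonnegative data matrix, U the nonnegative basis matrix, and V the nonnegative coefficient matrix, with X approximately equal to UV:

\min_{U \ge 0,\; V \ge 0} \;\; \| X - UV \|_{2,1} \;+\; \lambda \, \| U \|_{2,1} \;+\; \alpha \, \operatorname{Tr}\!\left( V L V^{\top} \right)

In a sketch of this kind, the L2,1-norm on the residual X - UV downweights samples corrupted by noise or outliers, the L2,1-norm on U acts as a robust regularizer on the basis matrix, and the trace term encourages coefficients of neighboring samples to stay close, encoding the local geometric structure.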