Over the past few decades, a large family of subspace learning algorithms based on dictionary learning has been designed to provide different ways of learning subspace features. Most of them are unsupervised algorithms intended for scenarios in which the data are unlabeled. However, label information is available in some applications, such as face recognition, and these dimensionality reduction techniques cannot exploit the labels to improve their performance. In such labeled scenarios, it is therefore desirable to transform an unsupervised subspace learning algorithm into a corresponding supervised one. In this paper, we propose a general approach for deriving a supervised algorithm from any unsupervised subspace learning algorithm based on sparse representation. Using this approach, we obtain a new supervised subspace learning algorithm, named supervised principal coefficients embedding (SPCE). We show that SPCE offers advantages over state-of-the-art supervised subspace learning algorithms.
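The abstract does not give the SPCE formulation itself, so the following is only a minimal illustrative sketch of the general idea it describes: injecting label information into a sparse-representation step of a subspace learning pipeline. Here each sample is encoded as a sparse combination of the other samples from its own class, and the resulting coefficient matrix could feed a subsequent embedding step. The function name, the Lasso-based solver, and all parameters are assumptions for illustration, not the authors' actual algorithm.

```python
import numpy as np
from sklearn.linear_model import Lasso

def supervised_sparse_coefficients(X, y, alpha=0.1):
    """Illustrative sketch (not the authors' SPCE): encode each sample as a
    sparse combination of the *other* samples of the same class, so the
    coefficient matrix reflects label information.

    X : (n_samples, n_features) data matrix
    y : (n_samples,) integer class labels
    Returns C, an (n_samples, n_samples) sparse coefficient matrix.
    """
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        # Restrict the dictionary to same-class samples, excluding sample i itself.
        mask = (y == y[i]).copy()
        mask[i] = False
        idx = np.where(mask)[0]
        if idx.size == 0:
            continue
        # Solve min_c ||x_i - D c||_2^2 + alpha * ||c||_1 with D = same-class samples.
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(X[idx].T, X[i])
        C[i, idx] = lasso.coef_
    return C

# Usage example with synthetic data (3 classes, 30 samples, 20 features).
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 20))
y = np.repeat([0, 1, 2], 10)
C = supervised_sparse_coefficients(X, y)
# A spectral embedding of C (e.g., an SVD of C + C.T) would then yield
# low-dimensional subspace features that respect the class labels.
print(C.shape)
```

In this sketch the supervision enters only through the class-restricted dictionary; how SPCE actually couples labels with principal coefficients embedding is specified in the full paper, not here.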