Unsupervised feature selection is an important machine learning task, since manually annotated data are extremely expensive to obtain and therefore very limited. However, due to the presence of noise and outliers in different data samples, feature selection without the discriminant information embedded in annotated data is quite challenging. To relieve these limitations, we investigate embedding spectral learning into a general sparse regression framework for unsupervised feature selection. The proposed general spectral sparse regression (GSSR) method jointly handles outlier features by learning joint sparsity and noisy features by preserving the local structure of the data. Specifically, GSSR is conducted in two stages. First, the classic sparse dictionary learning method is used to build the bases of the original data. After that, the original data are projected onto the basis space by learning a new representation via GSSR. In GSSR, the robust loss function $\ell_{2,r}$-norm (0 …
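The abstract describes a two-stage pipeline: sparse dictionary learning to obtain bases, followed by a sparse regression whose joint (row-wise) sparsity scores the original features. Below is a minimal sketch of that pipeline, assuming scikit-learn's DictionaryLearning for stage one and a simple iteratively reweighted least-squares solver for an $\ell_{2,1}$-regularized regression in stage two; the function name, parameters, and solver are illustrative assumptions, not the authors' GSSR implementation or its robust $\ell_{2,r}$ loss.

```python
# Two-stage sketch in the spirit of the abstract (assumptions, not the GSSR code):
# (1) sparse dictionary learning builds a basis space,
# (2) sparse regression with row-wise sparsity ranks the original features.
import numpy as np
from sklearn.decomposition import DictionaryLearning

def select_features(X, n_atoms=20, alpha=1.0, n_iter=50, n_selected=10):
    """X: (n_samples, n_features). Returns indices of the top-ranked features."""
    # Stage 1: learn a sparse dictionary; the sparse codes act as the target basis space.
    dl = DictionaryLearning(n_components=n_atoms, alpha=1.0, max_iter=200)
    B = dl.fit_transform(X)                      # (n_samples, n_atoms) sparse codes

    # Stage 2: solve min_W ||X W - B||_F^2 + alpha * ||W||_{2,1}
    # with a basic iteratively reweighted least-squares loop (an assumed solver;
    # the paper formulates its own robust ell_{2,r} objective).
    n_features = X.shape[1]
    D = np.eye(n_features)                       # reweighting matrix for the 2,1-norm
    for _ in range(n_iter):
        W = np.linalg.solve(X.T @ X + alpha * D, X.T @ B)
        row_norms = np.linalg.norm(W, axis=1) + 1e-8
        D = np.diag(1.0 / (2.0 * row_norms))

    # Features whose rows of W have the largest norms are kept.
    scores = np.linalg.norm(W, axis=1)
    return np.argsort(scores)[::-1][:n_selected]

# Illustrative usage on random data:
# X = np.random.randn(100, 50)
# print(select_features(X))
```

The row norms of W serve as feature scores because the $\ell_{2,1}$ penalty drives entire rows toward zero, so features that remain with large rows are the ones most useful for reconstructing the learned basis representation.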
               