Principal component analysis (PCA) and linear discriminant analysis (LDA) are classical dimensionality reduction methods for unsupervised and supervised learning, respectively. Compared with PCA, however, LDA loses several advantages because its between-class scatter matrix is singular, which yields a singular mapping and restricts the reduced dimension. In this paper, we propose a dimensionality reduction method, called reversible discriminant analysis (RDA), based on a newly defined full-rank between-class scatter matrix. With this between-class scatter matrix, RDA obtains a nonsingular mapping; thus, RDA can reduce the sample space to an arbitrary dimension, and the mapped samples can be recovered. RDA is also extended to kernel-based dimensionality reduction. In addition, PCA and LDA are special cases of RDA. Experiments on benchmark and real problems confirm the effectiveness of the proposed method.
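The rank deficiency the abstract refers to is the standard one: for c classes, the classical between-class scatter S_b is a sum of c rank-one terms built from class means that share a common constraint, so rank(S_b) is at most c - 1, capping LDA's reduced dimension at c - 1. The following minimal NumPy sketch (illustrative only, not the paper's RDA construction; all variable names are ours) demonstrates this limitation that motivates replacing S_b with a full-rank alternative.

```python
import numpy as np

# Sketch: the classical between-class scatter S_b = sum_k n_k (mu_k - mu)(mu_k - mu)^T
# has rank at most c - 1 for c classes, even in a d-dimensional space with d > c.
rng = np.random.default_rng(0)
c, n_per_class, d = 3, 50, 10          # 3 classes, 50 samples each, 10 dimensions

X = np.vstack([rng.normal(loc=k, size=(n_per_class, d)) for k in range(c)])
y = np.repeat(np.arange(c), n_per_class)

mu = X.mean(axis=0)                    # global mean
S_b = np.zeros((d, d))
for k in range(c):
    mu_k = X[y == k].mean(axis=0)      # class mean
    diff = (mu_k - mu).reshape(-1, 1)
    S_b += n_per_class * (diff @ diff.T)

# Prints 2 == c - 1: S_b is singular, so the LDA mapping built from it
# cannot exceed c - 1 dimensions and is not invertible.
print(np.linalg.matrix_rank(S_b))
```

Because the class-mean deviations sum to zero (weighted by class sizes), one term is linearly dependent on the others, which is exactly why LDA's projection is singular and non-reversible, the shortcoming RDA's full-rank scatter is designed to remove.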