Local Fisher Discriminant Analysis (LFDA) is a supervised feature extraction technique that has proved efficient at reducing several types of data. However, its cost depends on the number of samples per class, so that when a class is too large it can exhaust all the memory of commodity hardware, or fail to run at all. To work around this limit, we propose to introduce a parameter that adapts LFDA to the data's classes while accounting for the resources available on the machine in use. According to this parameter, LFDA treats an oversized class as a set of smaller sub-classes and processes the latter instead of the original class. We call our proposed optimization class-adapted LFDA, denoted caLFDA. We also propose a Python implementation of LFDA and show it to be more efficient than the existing MATLAB implementation. To assess the efficiency of caLFDA, we applied it to reduce several hyperspectral images and compared the results of classifying the reduced images to those obtained when the original LFDA reduces the data. When the hyperspectral images are too large for LFDA to reduce them, we compare caLFDA's results to those obtained with the most commonly used Principal Component Analysis (PCA).
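The core idea described above, treating an oversized class as a set of smaller sub-classes before running LFDA, can be sketched in Python as follows. This is an illustrative sketch only: the function name `split_large_classes` and the `max_class_size` parameter are assumptions for exposition, not the authors' actual caLFDA API, and the splitting strategy (consecutive chunks per class) is one simple possibility.

```python
import numpy as np

def split_large_classes(y, max_class_size):
    """Relabel samples so that no class exceeds max_class_size.

    Each class larger than the limit is divided into consecutive
    sub-classes; smaller classes keep a single (renumbered) label.
    Illustrative sketch of the caLFDA class-splitting idea, not
    the paper's implementation.
    """
    y = np.asarray(y)
    new_labels = np.empty(len(y), dtype=int)
    next_label = 0
    for cls in np.unique(y):
        idx = np.flatnonzero(y == cls)          # samples of this class
        n_sub = int(np.ceil(len(idx) / max_class_size))
        for k in range(n_sub):                  # assign one sub-class per chunk
            chunk = idx[k * max_class_size:(k + 1) * max_class_size]
            new_labels[chunk] = next_label
            next_label += 1
    return new_labels

# Example: one class of 5 samples, one of 2, with at most 3 samples
# per sub-class; class 0 becomes two sub-classes.
y = [0, 0, 0, 0, 0, 1, 1]
print(split_large_classes(y, 3))  # -> [0 0 0 1 1 2 2]
```

LFDA would then be run on the relabeled data, so its per-class scatter computations never see a class larger than the chosen limit; the limit itself would be derived from the memory available on the machine, as the abstract describes.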