Hyperspectral sparse unmixing methods estimate the abundances of endmembers, treating a spectral library as an overcomplete set of candidate endmembers. In this letter, we present a novel, fast, and efficient dictionary pruning approach for hyperspectral unmixing. We quantify the change in the latent structure of the data caused by augmenting the dictionary with a spectral library element, using a covariance similarity measure. Since covariance matrices form a nonlinear manifold, choosing an appropriate similarity measure is a nontrivial task. We explore prevalent similarity measures, and this study motivates us to employ Jeffrey's Kullback–Leibler divergence owing to its tighter bounds and better performance under noise. We also present analytical formulations for faster computation of the covariance similarity. We evaluate the performance of dictionary pruning algorithms on several synthetic and real hyperspectral images and demonstrate the effectiveness of the proposed approach in diverse scenarios.
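As a minimal sketch of the covariance-similarity idea: for two zero-mean Gaussians with covariance matrices S1 and S2, Jeffrey's divergence (the symmetrized Kullback–Leibler divergence) reduces to a closed form in which the log-determinant terms of the two directed KL divergences cancel. This is a standard identity, not the authors' exact implementation; the function name and NumPy-based setup below are illustrative assumptions.

```python
import numpy as np

def jeffreys_divergence(S1, S2):
    """Jeffrey's (symmetric KL) divergence between two zero-mean Gaussians
    with covariance matrices S1 and S2 (both symmetric positive definite):

        J = 0.5 * [tr(S2^{-1} S1) + tr(S1^{-1} S2)] - d

    The log-determinant terms of KL(p||q) and KL(q||p) cancel on summation.
    """
    d = S1.shape[0]
    # np.linalg.solve(A, B) computes A^{-1} B without forming the inverse
    t1 = np.trace(np.linalg.solve(S2, S1))
    t2 = np.trace(np.linalg.solve(S1, S2))
    return 0.5 * (t1 + t2) - d
```

In a dictionary-pruning loop, one would compute the covariance of the data, compute it again after augmenting with a candidate library element, and rank candidates by this divergence; `np.linalg.solve` is used instead of an explicit matrix inverse for numerical stability.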