Can we automatically learn meaningful semantic feature representations when training labels are absent? Several recent unsupervised deep learning approaches have attempted to tackle this problem by solving a data reconstruction task. However, these methods can easily latch onto low-level features. To solve this problem, we propose an end-to-end spectral-spatial semantic feature learning network (S3FN) for unsupervised deep semantic feature extraction from hyperspectral images (HSIs). Our main idea is to learn spectral-spatial features from a high-level semantic perspective. First, we apply feature transformations to obtain two feature descriptions of the same source data from different views. Then, we propose a spectral-spatial feature learning network that projects the two feature descriptions into a deep embedding space. Subsequently, a contrastive loss function is introduced to align the two projected features, which should carry the same implied semantic meaning. The proposed S3FN learns the spectral and spatial features separately and then merges them. Finally, the spectral-spatial features learned by S3FN are fed to a classifier to evaluate their effectiveness. Experimental results on three publicly available HSI data sets show that our proposed S3FN produces promising classification results at a lower time cost than other state-of-the-art (SOTA) deep learning-based unsupervised feature extraction methods.
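The abstract describes aligning two projected views of the same data with a contrastive loss. The exact loss used by S3FN is not given here; below is a minimal NumPy sketch assuming an InfoNCE-style contrastive formulation, where matching pairs (the two views of the same pixel) are pulled together and mismatched pairs pushed apart. The function name, the temperature value, and the choice of InfoNCE are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def contrastive_align_loss(z1, z2, temperature=0.5):
    """InfoNCE-style loss pulling paired projections z1[i], z2[i] together.

    z1, z2: (n, d) embeddings of the same samples seen from two views
    (e.g., a spectral description and a spatial description).
    NOTE: this is a generic sketch, not the loss defined in the paper.
    """
    # L2-normalize so similarity is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature            # (n, n) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    # cross-entropy with the diagonal (matching pair) as the positive class
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Under this formulation, two identical sets of projections yield a lower loss than two unrelated sets, which is the alignment behavior the abstract attributes to its contrastive objective.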