Recently, deep autoencoder-based methods for blind spectral unmixing have attracted great attention because they can achieve superior performance. However, most autoencoder-based unmixing methods train their networks with an unstructured reconstruction loss, which ignores band-to-band dependencies and fine-grained spectral information. To cope with this issue, we propose a general perceptual loss-constrained adversarial autoencoder network for hyperspectral unmixing. Specifically, an adversarial training process is used to update the framework: the discriminative network proves effective at discovering discrepancies between reconstructed pixels and their corresponding ground truth. Moreover, the general perceptual loss is combined with the adversarial loss to further improve the consistency of high-level representations. Ablation studies verify the effectiveness of the proposed components, and experiments on both synthetic and real data illustrate the superiority of our framework over competing methods.
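The abstract describes the training objective only in words; as a rough illustration, the sketch below shows one way such a combined objective could be written in PyTorch. The network sizes, the loss weights, and the choice of discriminator hidden activations as the perceptual representation are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): reconstruction + adversarial + perceptual
# terms for an unmixing autoencoder. Layer widths, lambda weights, and the use of
# discriminator features as the perceptual representation are illustrative assumptions.
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Judges whether a pixel spectrum is real or reconstructed;
    its hidden activations double as perceptual features."""
    def __init__(self, n_bands: int, hidden: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Linear(n_bands, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, hidden), nn.LeakyReLU(0.2),
        )
        self.head = nn.Linear(hidden, 1)  # real/fake logit

    def forward(self, x):
        f = self.features(x)
        return self.head(f), f

def generator_loss(x_true, x_recon, disc,
                   lam_adv: float = 1e-2, lam_perc: float = 1e-1):
    """Combined loss for the autoencoder (generator) update."""
    bce = nn.BCEWithLogitsLoss()
    logit_fake, feat_fake = disc(x_recon)
    with torch.no_grad():
        _, feat_real = disc(x_true)

    l_recon = nn.functional.mse_loss(x_recon, x_true)      # pixel-wise reconstruction
    l_adv = bce(logit_fake, torch.ones_like(logit_fake))    # fool the discriminator
    l_perc = nn.functional.mse_loss(feat_fake, feat_real)   # high-level consistency
    return l_recon + lam_adv * l_adv + lam_perc * l_perc
```

In this kind of setup the discriminator is updated in a separate step with a standard real/fake objective, while the perceptual term pushes reconstructed spectra to match ground-truth pixels in feature space rather than only band by band.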
               