Deep learning (DL)-based methods have been widely used in pansharpening and have made great progress. To increase accuracy, DL-based model structures can be improved by exploiting the multiresolution information of the panchromatic (PAN) image and the self-similarity of the multispectral (MS) images, yet few existing methods fully exploit both characteristics in their constructed models. To address this problem, this letter proposes a PAN-guided multiresolution fusion (PMRF) network based on the Swin transformer (ST). In the proposed PMRF network, multiresolution features extracted from the PAN image are fused with features extracted from the MS images to guide a level-by-level improvement of spatial resolution. Furthermore, an ST-based residual self-attention (STRA) module is designed to combine the advantages of the ST and residual learning, fully exploiting self-similarity to enhance feature representation. Experimental results show that the proposed method outperforms state-of-the-art methods in both spatial enhancement and spectral preservation.
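As a rough illustration of the STRA idea described above, the sketch below shows a residual window-attention block in PyTorch: Swin-style self-attention computed within non-overlapping local windows and added back to the input through a skip connection. This is only a minimal sketch of the general technique, not the authors' implementation; the class name, window size, embedding dimension, and head count are illustrative assumptions.

    # Minimal sketch (not the authors' code) of a residual windowed self-attention
    # block in the spirit of the STRA module: Swin-style attention within local
    # windows, wrapped in a residual connection. Dimensions are assumptions.
    import torch
    import torch.nn as nn

    class ResidualWindowAttention(nn.Module):
        """Windowed multi-head self-attention with a residual (skip) connection."""

        def __init__(self, dim=64, window=8, heads=4):
            super().__init__()
            self.window = window
            self.norm = nn.LayerNorm(dim)
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, x):                      # x: (B, C, H, W)
            b, c, h, w = x.shape
            ws = self.window
            # Partition the feature map into non-overlapping ws x ws windows,
            # giving one token sequence of length ws*ws per window.
            win = x.view(b, c, h // ws, ws, w // ws, ws)
            win = win.permute(0, 2, 4, 3, 5, 1).reshape(-1, ws * ws, c)
            # Self-attention over the tokens of each window exploits local
            # self-similarity in the features.
            y = self.norm(win)
            y, _ = self.attn(y, y, y)
            # Residual learning: add the attention output back to the input tokens.
            win = win + y
            # Reverse the window partition back to (B, C, H, W).
            win = win.view(b, h // ws, w // ws, ws, ws, c)
            return win.permute(0, 5, 1, 3, 2, 4).reshape(b, c, h, w)

    if __name__ == "__main__":
        feat = torch.randn(1, 64, 32, 32)             # hypothetical MS feature map
        print(ResidualWindowAttention()(feat).shape)  # torch.Size([1, 64, 32, 32])

In a fusion network of this kind, such a block would typically be applied to the fused PAN/MS features at each resolution level; the exact placement and the shifted-window details of the actual PMRF/STRA design are described in the full letter.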