Pan-sharpening methods based on deep neural networks (DNNs) have produced state-of-the-art results. However, the common information in the panchromatic (PAN) image and the low spatial resolution multispectral (LRMS) image is not sufficiently explored. Because PAN and LRMS images are collected from the same scene, they share some common information in addition to their respective unique information, so directly concatenating extracted features introduces redundancy into the feature space. To reduce this redundancy and exploit the global information in the source images, we propose a novel pan-sharpening method that combines a convolutional neural network with a transformer. Specifically, the PAN and LRMS images are encoded into unique features and common features by subnetworks consisting of convolutional blocks and transformer blocks. The common features are then averaged and combined with the unique features from both source images to reconstruct the fused image. To extract accurate common features, an equality constraint is imposed on them. Experimental results show that the proposed method outperforms state-of-the-art methods on both reduced-scale and full-scale datasets. The source code is available at https://github.com/RSMagneto/TRRNet.
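The fusion scheme the abstract outlines can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: the function names are hypothetical, NumPy arrays stand in for learned CNN/transformer feature maps, and the equality constraint is rendered as a simple mean-squared penalty between the two branches' common features.

```python
import numpy as np

def fuse_features(unique_pan, unique_ms, common_pan, common_ms):
    """Combine features as the abstract describes: average the common
    features from the two branches, then concatenate the result with
    each source's unique features for reconstruction."""
    common_avg = (common_pan + common_ms) / 2.0  # shared scene content
    return np.concatenate([unique_pan, unique_ms, common_avg], axis=-1)

def equality_loss(common_pan, common_ms):
    """Mean-squared penalty pushing the two common-feature maps toward
    equality (a simple reading of the paper's equality constraint)."""
    return float(np.mean((common_pan - common_ms) ** 2))

# Toy feature maps: height x width x channels
rng = np.random.default_rng(0)
u_pan = rng.standard_normal((8, 8, 16))  # unique to the PAN branch
u_ms = rng.standard_normal((8, 8, 16))   # unique to the LRMS branch
c_pan = rng.standard_normal((8, 8, 16))  # common, PAN branch
c_ms = c_pan.copy()                      # identical common features

fused = fuse_features(u_pan, u_ms, c_pan, c_ms)
print(fused.shape)                 # (8, 8, 48)
print(equality_loss(c_pan, c_ms))  # 0.0 when the branches agree
```

The concatenated tensor (two unique maps plus one averaged common map) is what a decoder would consume; when the common features match exactly, the penalty vanishes.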