Deep-learning-based pansharpening methods have achieved remarkable results due to their powerful feature representation ability. However, existing deep-learning-based pansharpening methods not only lack information exchange and sharing between features at different resolutions but also fail to make effective use of residual information at different levels. These shortcomings can lead to the loss of spatial and spectral information in the pansharpened image. To address these problems, we propose a novel dual-stream convolutional neural network with residual information enhancement (DSCNN-RIE) for pansharpening. The proposed network is mainly composed of a set of dual-stream information complementation blocks (DSICBs), each of which simultaneously extracts spatial details at two different resolutions using convolutional filters of multiple sizes and effectively transfers complementary information between the two resolutions. Furthermore, to improve the learning ability of the network and enhance feature extraction, an RIE strategy is presented that stacks different levels of residuals into the outputs of the cascaded DSICBs. The final pansharpened image is obtained by integrating the extracted features with the shallow features of the source images. Experimental results on three datasets demonstrate that DSCNN-RIE outperforms ten state-of-the-art pansharpening methods in both subjective and objective image-quality evaluations.
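The abstract does not give implementation details, so the following is only a minimal, hypothetical sketch of what a dual-stream information complementation block and a residual-enhanced cascade might look like in PyTorch. All layer choices, channel counts, kernel sizes, and the exact cross-resolution exchange and residual-stacking rules are assumptions for illustration, not the authors' design.

```python
# Illustrative sketch only: a guess at a DSICB-style dual-stream block and an
# RIE-style cascade. Names, shapes, and fusion rules are placeholder assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DSICB(nn.Module):
    """Hypothetical dual-stream block: multi-size convolutions on a
    full-resolution (PAN-scale) stream and a low-resolution (MS-scale)
    stream, with cross-stream transfer of complementary features."""

    def __init__(self, channels=32):
        super().__init__()
        # Filters of different sizes capture spatial details at different scales.
        self.hi_3x3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.hi_5x5 = nn.Conv2d(channels, channels, 5, padding=2)
        self.lo_3x3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.lo_5x5 = nn.Conv2d(channels, channels, 5, padding=2)
        # 1x1 convolutions fuse each stream's own features with the
        # complementary features transferred from the other stream.
        self.fuse_hi = nn.Conv2d(3 * channels, channels, 1)
        self.fuse_lo = nn.Conv2d(3 * channels, channels, 1)

    def forward(self, hi, lo):
        hi_feat = F.relu(self.hi_3x3(hi)) + F.relu(self.hi_5x5(hi))
        lo_feat = F.relu(self.lo_3x3(lo)) + F.relu(self.lo_5x5(lo))
        # Exchange complementary information between the two resolutions.
        lo_to_hi = F.interpolate(lo_feat, size=hi.shape[-2:], mode="bilinear",
                                 align_corners=False)
        hi_to_lo = F.adaptive_avg_pool2d(hi_feat, lo.shape[-2:])
        hi_out = F.relu(self.fuse_hi(torch.cat([hi, hi_feat, lo_to_hi], dim=1)))
        lo_out = F.relu(self.fuse_lo(torch.cat([lo, lo_feat, hi_to_lo], dim=1)))
        return hi_out, lo_out


class RIECascade(nn.Module):
    """Cascaded DSICBs; residuals from earlier blocks are accumulated and
    added to later outputs (one possible reading of the RIE strategy)."""

    def __init__(self, channels=32, num_blocks=3):
        super().__init__()
        self.blocks = nn.ModuleList(DSICB(channels) for _ in range(num_blocks))

    def forward(self, hi, lo):
        residuals = []
        for block in self.blocks:
            hi_out, lo = block(hi, lo)
            # Enhance this block's output with residuals from earlier levels.
            hi_out = hi_out + sum(residuals, torch.zeros_like(hi_out))
            residuals.append(hi_out - hi)  # record this level's residual
            hi = hi_out
        return hi, lo


if __name__ == "__main__":
    # Shallow features from the PAN (full-resolution) and MS (reduced-resolution) inputs.
    hi = torch.randn(1, 32, 64, 64)
    lo = torch.randn(1, 32, 16, 16)
    out_hi, out_lo = RIECascade()(hi, lo)
    print(out_hi.shape, out_lo.shape)
```

In this sketch the final reconstruction step (integrating `out_hi` with the shallow features of the source images to form the pansharpened image) is omitted, since the abstract does not specify how that fusion is performed.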
               