Recently, convolutional neural networks have shown superior performance in single-image superresolution. Although existing mean-square-error-based methods achieve high peak signal-to-noise ratio (PSNR), they tend to generate oversmoothed results. Generative adversarial network (GAN)-based methods can provide high-resolution (HR) images with higher perceptual quality, but they produce pseudo-textures, which generally leads to lower PSNR. Moreover, different regions in remote sensing images (RSIs) correspond to distinct surface topography and visual characteristics, which means a uniform reconstruction strategy may not suit all targets in RSIs. To solve these problems, we propose a novel saliency-discriminated GAN for RSI superresolution. First, hierarchical weakly supervised saliency analysis is introduced to compute a saliency map, which is then used to distinguish the differing reconstruction demands of regions in the subsequent generator and discriminator. Unlike previous GANs, the proposed residual dense saliency generator takes saliency maps as a supplementary condition. Meanwhile, exploiting the characteristics of RSIs, we design a new paired discriminator to enhance perceptual quality, which measures the distance between generated images and HR images in salient areas and nonsalient areas, respectively. Comprehensive evaluations validate the superiority of the proposed model.
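The abstract does not give implementation details for the paired discriminator, so the following is only a minimal sketch of the idea it describes: assuming the saliency map is a single-channel mask in [0, 1], the generated and HR images are split into complementary salient and nonsalient components, and each component is judged by its own discriminator branch. The names `PairedDiscriminator` and `discriminator_loss`, the PatchGAN-style branch architecture, and the BCE loss form are illustrative assumptions, not the authors' actual design.

```python
import torch
import torch.nn as nn


class PairedDiscriminator(nn.Module):
    """Sketch of a saliency-paired discriminator: one branch scores
    salient regions, the other nonsalient regions. The small
    PatchGAN-style CNN used for each branch is an assumption; the
    abstract does not specify the architecture."""

    def __init__(self, in_channels: int = 3):
        super().__init__()

        def branch() -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(in_channels, 64, 4, stride=2, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(64, 128, 4, stride=2, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(128, 1, 4, stride=1, padding=1),  # patch logits
            )

        self.salient_branch = branch()
        self.nonsalient_branch = branch()

    def forward(self, img: torch.Tensor, saliency: torch.Tensor):
        # saliency: (N, 1, H, W) map in [0, 1]; mask the image into
        # complementary salient / nonsalient components.
        salient_part = img * saliency
        nonsalient_part = img * (1.0 - saliency)
        return self.salient_branch(salient_part), self.nonsalient_branch(nonsalient_part)


def discriminator_loss(disc, fake, real, saliency):
    """Standard non-saturating GAN loss computed separately on the
    salient and nonsalient components, then summed with equal weight
    (the weighting in the paper is not specified in the abstract)."""
    bce = nn.BCEWithLogitsLoss()
    f_s, f_ns = disc(fake.detach(), saliency)  # detach: update D only
    r_s, r_ns = disc(real, saliency)
    return (bce(r_s, torch.ones_like(r_s)) + bce(f_s, torch.zeros_like(f_s))
            + bce(r_ns, torch.ones_like(r_ns)) + bce(f_ns, torch.zeros_like(f_ns)))


if __name__ == "__main__":
    sr = torch.randn(2, 3, 64, 64)   # generated (SR) batch, placeholder data
    hr = torch.randn(2, 3, 64, 64)   # ground-truth HR batch, placeholder data
    sal = torch.rand(2, 1, 64, 64)   # saliency maps in [0, 1]
    d = PairedDiscriminator()
    print(discriminator_loss(d, sr, hr, sal))
```

The key design point the sketch illustrates is that masking makes the two branches see disjoint image content, so each can learn a region-specific notion of realism instead of a single uniform one, matching the abstract's claim that salient and nonsalient areas are measured separately.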