The multispectral images captured by the Sentinel-2 (S2) satellite contain 13 spectral bands at different resolutions, which may hinder some subsequent applications. In this article, we design a novel method to super-resolve the 20- and 60-m coarse bands of S2 images to 10 m, yielding a complete dataset at the 10-m resolution. To tackle this inverse problem, we leverage the deep image prior expressed by a convolutional neural network (CNN). Specifically, a plain ResNet architecture is adopted, and 3-D separable convolutions are used to better capture spatial–spectral features. The loss function is tailored to the degradation model, enforcing that the network output obeys the degradation process. Meanwhile, a network parameter initialization strategy is designed to further mine the abundant fine detail provided by the existing 10-m bands. The network parameters are inferred solely from the observed S2 image in a self-supervised manner, without involving any extra training data; the network then outputs the super-resolution result. On the one hand, our method exploits the high model capacity of CNNs while working without the large amounts of training data required by many deep learning techniques. On the other hand, the degradation process is fully considered, and each module in our work is interpretable. Numerical results on synthetic and real data show that our method outperforms the compared state-of-the-art methods.
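The degradation-consistency loss described above can be sketched in a few lines: the candidate high-resolution output is passed through an assumed degradation operator (blur followed by downsampling) and compared against the observed low-resolution band. This is a minimal illustrative sketch, not the paper's implementation; the Gaussian kernel size/width and the downsampling ratio are hypothetical placeholders for the sensor's actual point spread function and resolution ratios.

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.5):
    """Normalized 2-D Gaussian kernel (placeholder for the sensor PSF)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def degrade(hr, ratio=2, sigma=1.5):
    """Assumed degradation model: Gaussian blur, then downsample by `ratio`."""
    k = gaussian_kernel(sigma=sigma)
    pad = k.shape[0] // 2
    padded = np.pad(hr, pad, mode="reflect")
    blurred = np.empty_like(hr)
    # Naive 'same' 2-D convolution via sliding windows (clarity over speed).
    for i in range(hr.shape[0]):
        for j in range(hr.shape[1]):
            blurred[i, j] = np.sum(padded[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return blurred[::ratio, ::ratio]

def degradation_loss(sr, lr, ratio=2):
    """MSE between the degraded super-resolved image and the observed LR band."""
    return float(np.mean((degrade(sr, ratio) - lr) ** 2))
```

In the self-supervised setting, this loss (evaluated on the CNN's output) is what drives the network parameters, so no external high-resolution ground truth is ever needed: consistency with the observed coarse bands under the degradation model is the only supervision signal.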