Abstract: Recent years have witnessed the great success of Single Image Super-Resolution (SISR) with convolutional neural network (CNN) based models. Most existing Super-Resolution (SR) networks either take bicubic-upscaled images as input, or take low-resolution images as input and apply transposed convolution or sub-pixel convolution only in the reconstruction stage, and thus do not exploit the hierarchical features across the network for the final reconstruction. In this paper, we propose a novel stacked U-shape network with channel-wise attention (SUSR) for SISR. The proposed network consists of four parts: a shallow feature extraction block, stacked U-shape blocks that produce high-resolution features, residual channel-wise attention blocks, and a reconstruction block. The hierarchical high-resolution features produced by the U-shape blocks have the same spatial size as the final super-resolved image; thus, unlike existing methods, we perform the upsampling operation inside the U-shape blocks. To fully exploit the different hierarchical features, we propose a residual attention block (RAB) that performs feature refinement by explicitly modeling the relationships between channels. Experiments on five public datasets show that our method achieves much higher Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM) scores than state-of-the-art methods.
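The abstract does not give the internals of the residual attention block (RAB). A minimal NumPy sketch of channel-wise attention with a residual connection, in the general style of squeeze-and-excitation, might look like the following; the function names and weight shapes (including the reduction ratio) are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def channel_attention(x, w1, b1, w2, b2):
    """Channel-wise attention sketch (squeeze-and-excitation style).

    x  : feature map of shape (C, H, W)
    w1 : (C // r, C) channel-reduction weights, b1 : (C // r,) bias
    w2 : (C, C // r) channel-expansion weights, b2 : (C,) bias
    These shapes and the reduction ratio r are illustrative assumptions.
    """
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    z = x.mean(axis=(1, 2))
    # Excitation: two fully connected layers with ReLU then sigmoid
    s = np.maximum(w1 @ z + b1, 0.0)           # reduce channels + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s + b2)))   # expand channels + sigmoid
    # Scale: reweight each channel of the input feature map
    return x * s[:, None, None]

def residual_attention_block(x, w1, b1, w2, b2):
    # Residual connection around the attention-refined features
    return x + channel_attention(x, w1, b1, w2, b2)
```

Because each attention gate lies in (0, 1), the residual formulation lets the block fall back toward an identity mapping for channels whose gates are small, which is one common motivation for combining channel attention with a skip connection.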