Abstract: Research on super-resolution has achieved great success on synthetic data with deep convolutional neural networks, and some recent works have begun to apply super-resolution to practical scenarios. Learning an accurate and flexible model for super-resolution at arbitrary scale factors is important for realistic applications, yet most existing works focus only on integer scale factors. In this work, we present a residual scale attention network for super-resolution at arbitrary scale factors. Specifically, we design a scale attention module that learns discriminative features of low-resolution images by introducing the scale factor as prior knowledge. We then use a quadratic polynomial of the coordinate information and the scale factor to predict pixel-wise reconstruction kernels, enabling super-resolution at arbitrary scale factors. In addition, we first apply the predicted reconstruction kernels in the image domain to interpolate the low-resolution image into a coarse high-resolution image, and then let the main network learn the high-frequency residual image in the feature domain. Extensive experiments on both synthetic and real data show that the proposed method outperforms state-of-the-art arbitrary-scale super-resolution methods in terms of both objective metrics and subjective visual quality.
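The two components described above can be made concrete with a minimal sketch. The snippet below is an illustrative PyTorch rendition of (1) a scale attention module that re-weights channels using the scale factor as prior knowledge, and (2) a kernel predictor driven by quadratic polynomial terms of the pixel coordinate and the scale factor. Module names, layer widths, and the exact choice of polynomial terms are assumptions for illustration, not the authors' released implementation.

```python
# Hypothetical sketch of the abstract's two ideas; layer sizes and polynomial
# terms are assumptions, not the paper's actual architecture.
import torch
import torch.nn as nn


class ScaleAttention(nn.Module):
    """Channel attention conditioned on the scale factor (illustrative design)."""

    def __init__(self, channels: int, hidden: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels + 1, hidden),  # +1 for the scalar scale factor
            nn.ReLU(inplace=True),
            nn.Linear(hidden, channels),
            nn.Sigmoid(),
        )

    def forward(self, feat: torch.Tensor, scale: float) -> torch.Tensor:
        # Global average pooling summarizes each channel; the scale factor is
        # appended as prior knowledge before predicting channel weights.
        pooled = feat.mean(dim=(2, 3))                         # (B, C)
        s = feat.new_full((feat.size(0), 1), scale)            # (B, 1)
        weights = self.mlp(torch.cat([pooled, s], dim=1))      # (B, C)
        return feat * weights.unsqueeze(-1).unsqueeze(-1)      # re-weighted features


class KernelPredictor(nn.Module):
    """Predict per-pixel reconstruction kernels from quadratic polynomial terms
    of the relative coordinate (x, y) and the scale factor r (sketch)."""

    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.kernel_size = kernel_size
        # Quadratic terms of (x, y, 1/r): [x, y, 1/r, x^2, y^2, (1/r)^2, xy, x/r, y/r, 1]
        self.fc = nn.Linear(10, kernel_size * kernel_size)

    def forward(self, coords: torch.Tensor, scale: float) -> torch.Tensor:
        # coords: (N, 2) relative offsets of each HR pixel inside its LR cell.
        x, y = coords[:, 0:1], coords[:, 1:2]
        r = torch.full_like(x, 1.0 / scale)
        poly = torch.cat(
            [x, y, r, x * x, y * y, r * r, x * y, x * r, y * r, torch.ones_like(x)],
            dim=1,
        )                                                      # (N, 10)
        kernels = self.fc(poly)                                # (N, k*k)
        return torch.softmax(kernels, dim=1)                   # normalized kernel weights
```

In this reading, the normalized kernels interpolate the low-resolution image into a coarse high-resolution estimate, and the attention-refined feature branch supplies the high-frequency residual that is added back, matching the residual structure the abstract describes.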