The image super-resolution (SR) task aims to recover high-resolution (HR) images from degraded low-resolution (LR) images, and has seen great progress thanks to recent advances in deep neural networks. Because of the severe information loss in LR images, reconstructing high-quality HR images is especially challenging at large scale factors, i.e., higher than $4\times$. Traditional reference-image-based SR methods typically perform patch matching to locate detailed textures in HR reference images, which can supply fine details from similar image content. However, matching in the heavily downscaled image or feature space is difficult due to the ill-posed nature of the LR-to-HR mapping. In this paper, we tackle this problem by exploiting the fine details contained in HR reference images. Inspired by vector quantization (VQ), we propose a simple yet effective auto-encoder convolutional neural network (CNN) module to learn discrete representations of images. Furthermore, we propose to progressively learn pairs of cross-scale discrete feature representations from paired LR and HR reference images. The coarser-scale discrete representation encodes the global image structure, while its paired finer-scale discrete representation captures the details missing at the finer image scale. During inference, continuous features of the test LR image serve as queries to retrieve finer-scale discrete representations (values) by searching for the nearest coarser-scale discrete representations (keys). The queries and retrieved values are then combined to progressively recover the HR image. Experimental results indicate that, compared with state-of-the-art image SR models, the proposed method achieves superior performance in terms of both objective and subjective quality. The code will be available at https://github.com/sunwj/refsr.
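The cross-scale retrieval step described above can be sketched as a nearest-neighbor lookup between paired codebooks. The sketch below is a minimal illustration under assumed shapes, not the paper's implementation: `retrieve_fine_codes` and the codebook names are hypothetical, and real systems would operate on learned CNN feature maps rather than plain arrays.

```python
import numpy as np

def retrieve_fine_codes(queries, coarse_codebook, fine_codebook):
    """Key-value retrieval across paired cross-scale codebooks.

    queries:         (N, D) continuous features of the test LR image
    coarse_codebook: (K, D) coarser-scale discrete representations (keys)
    fine_codebook:   (K, D_fine) paired finer-scale representations (values)
    Returns the retrieved fine-scale values and the matched key indices.
    """
    # Squared Euclidean distance from every query to every coarse key.
    dists = ((queries[:, None, :] - coarse_codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)        # nearest coarse key per query
    return fine_codebook[idx], idx    # paired fine-scale values + indices

# Toy usage: 3 coarse keys paired with 3 fine values.
coarse = np.eye(3)
fine = np.arange(6.0).reshape(3, 2)
values, indices = retrieve_fine_codes(coarse[[2, 0]], coarse, fine)
```

In the full method, the retrieved values would then be fused with the queries to progressively reconstruct the HR image.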