Image super-resolution (SR) plays an important role in many areas because it promises to generate high-resolution (HR) images without upgrading image sensors. Many existing SR methods require a large external training set, which consumes a lot of memory, and training such models is usually time-consuming. Moreover, these methods must be retrained whenever the magnification factor changes. To overcome these problems, we propose a method that exploits self-similarity and therefore needs no external training set. First, the original low-resolution (LR) image is rotated at different angles to expand the training set. Second, multi-scale Difference of Gaussian filters are applied to obtain multi-view feature maps, which provide a more accurate representation of the image. Then, the feature maps are divided into patches in parallel to build an internal training set. Finally, nonlocal means is applied to each LR patch of the original LR image to infer the corresponding HR patch. To accelerate the proposed method by exploiting the computational power of the GPU, we implement it with the compute unified device architecture (CUDA). Experimental results show that the proposed method performs best among the compared methods in terms of both visual perception and objective quantitative measures, and it achieves a remarkable speedup when implemented with CUDA.
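The abstract only names the building blocks (rotation-based augmentation, multi-scale Difference of Gaussian features, an internal patch set, and nonlocal-means inference). The following Python sketch is not the authors' implementation; it is a minimal illustration of how these pieces might fit together, and all names and parameters (the DoG sigma pairs, rotation angles, patch size, blur proxy for re-degradation, and the nonlocal-means bandwidth `h`) are assumptions.

```python
# Illustrative sketch of self-similarity-based SR (CPU only; not the authors' code).
import numpy as np
from scipy.ndimage import gaussian_filter, rotate

def dog_feature_maps(img, sigma_pairs=((1.0, 1.6), (1.6, 2.56))):
    """Multi-scale Difference of Gaussian maps: each (s1, s2) pair gives one 'view'."""
    return [gaussian_filter(img, s1) - gaussian_filter(img, s2) for s1, s2 in sigma_pairs]

def extract_patches(img, size=5, stride=1):
    """Collect overlapping patches as flat vectors (a simple internal training set)."""
    H, W = img.shape
    patches = []
    for i in range(0, H - size + 1, stride):
        for j in range(0, W - size + 1, stride):
            patches.append(img[i:i + size, j:j + size].ravel())
    return np.asarray(patches)

def build_internal_set(lr, angles=(0, 90, 180, 270), blur_sigma=1.0, size=5):
    """Rotate the LR image to enlarge the example pool; pair each re-degraded
    feature patch with the sharper parent patch it came from.
    (Blurring stands in for downscaling here to keep shapes identical.)"""
    feats, parents = [], []
    for a in angles:
        parent = rotate(lr, a, reshape=False, order=1)
        degraded = gaussian_filter(parent, blur_sigma)
        feat = np.sum(dog_feature_maps(degraded), axis=0)   # fused multi-view feature
        feats.append(extract_patches(feat, size))
        parents.append(extract_patches(parent, size))
    return np.vstack(feats), np.vstack(parents)

def nlm_infer(query_feat, feats, parents, h=10.0, k=16):
    """Nonlocal-means style estimate: weight the k nearest internal examples."""
    d = np.sum((feats - query_feat) ** 2, axis=1)
    idx = np.argsort(d)[:k]
    w = np.exp(-d[idx] / (h * h))
    w /= w.sum()
    return (w[:, None] * parents[idx]).sum(axis=0)

# Usage (illustrative): compute DoG feature patches of the input LR image, match each
# one against the internal set, and estimate its sharper patch by nonlocal-means weighting.
```

The GPU version described in the abstract would map the patch-distance and weighting loops above onto CUDA kernels, since each query patch can be processed independently; that part is omitted here.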