Due to the challenges of collecting paired low-resolution (LR) and high-resolution (HR) images in real-world scenarios, most existing deep convolutional neural network (CNN)-based single image super-resolution (SR) models are trained with artificially synthesized LR-HR image pairs. However, the domain gap between the synthetic data used for model training and the realistic data encountered at test time degrades SR performance significantly, which discourages the application of SR models in practice. One possible solution is to learn from unpaired real-world LR and HR images, since such images are readily accessible. Predominant strategies are mainly based on unsupervised domain translation. Despite great advances, noticeable domain gaps remain between the realistic-like/synthetic-like images generated by unpaired translation and truly realistic/synthetic ones. To address this problem, this letter proposes an effective unsupervised SR framework based on dual synthetic-to-realistic and realistic-to-synthetic translations, namely DTSR. Specifically, to bridge the domain gap between testing and training data, the SR model is optimized using HR images and their realistic-like LR counterparts produced by the synthetic-to-realistic translation. In turn, we propose to narrow the domain gap further by applying the realistic-to-synthetic translation to realistic LR images before super-resolving them, which also means the SR model faces simpler examples at test time than during training. Moreover, focal frequency and bilateral filtering losses are introduced into DTSR for better detail restoration and artifact suppression. Extensive experiments show that our DTSR outperforms several state-of-the-art models in both quantitative and qualitative comparisons.
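
For concreteness, the sketch below illustrates the dual-translation flow described in the abstract, assuming PyTorch. The translator networks `G_s2r` and `G_r2s`, the SR network `sr_net`, the loss weight `lambda_ff`, and the simplified focal frequency loss are illustrative assumptions, not the authors' implementation, and the bilateral filtering loss is omitted for brevity.

```python
# Hypothetical sketch of the DTSR training/inference flow; network modules
# (G_s2r, G_r2s, sr_net) and loss weights are placeholders, not the paper's code.
import torch
import torch.nn.functional as F


def focal_frequency_loss(pred, target, alpha=1.0):
    """Simplified focal frequency loss: weight spectral errors by their own
    magnitude so hard-to-restore frequency components dominate the objective."""
    pred_f = torch.fft.fft2(pred, norm="ortho")
    target_f = torch.fft.fft2(target, norm="ortho")
    diff = torch.abs(pred_f - target_f)
    weight = (diff ** alpha).detach()
    weight = weight / (weight.amax(dim=(-2, -1), keepdim=True) + 1e-8)
    return (weight * diff ** 2).mean()


def training_step(hr, G_s2r, sr_net, scale=4, lambda_ff=0.1):
    """Training: downsample HR to a synthetic LR image, translate it to the
    realistic domain, then supervise the SR output with the original HR."""
    lr_syn = F.interpolate(hr, scale_factor=1.0 / scale, mode="bicubic")
    with torch.no_grad():                      # translator assumed pre-trained
        lr_real_like = G_s2r(lr_syn)
    sr = sr_net(lr_real_like)
    return F.l1_loss(sr, hr) + lambda_ff * focal_frequency_loss(sr, hr)


def inference(lr_real, G_r2s, sr_net):
    """Testing: map the realistic LR input toward the synthetic domain first,
    so the SR model sees simpler inputs than those used during training."""
    with torch.no_grad():
        return sr_net(G_r2s(lr_real))
```

In this reading, the two translations attack the domain gap from both sides: synthetic-to-realistic translation makes the training inputs harder and more realistic, while realistic-to-synthetic translation makes the test inputs cleaner and closer to the training distribution.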