This article proposes a novel approach to regularizing the ill-posed and non-linear problem of blind image deconvolution (blind deblurring) using deep generative networks as priors. We employ two separate pretrained generative networks: given lower-dimensional Gaussian vectors as input, one generative model samples from the distribution of sharp images, while the other samples from the distribution of blur kernels. To deblur, we find a sharp image and a blur kernel in the range of the respective generators that best explain the blurred image. Our experiments show promising deblurring results even under large blurs and heavy measurement noise. Generative models often exhibit a representation error, i.e., they cannot exactly fit arbitrary samples from the learned distribution; this may stem from factors such as mode collapse, architectural choices, or training caveats. To improve the generalizability of the proposed approach, we present a modification of the scheme that governs the deblurring process under both generative and classical priors. Training generative models is computationally expensive on larger and more diverse image datasets. Our experiments also show that even an untrained structured (convolutional) network acts as an image prior; we leverage this fact to deblur diverse/complex images for which a trained generative network might not be available.
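To make the latent-space search concrete, the following is a minimal sketch (not the authors' released implementation) of how a blurred observation could be explained by jointly optimizing the two Gaussian latent codes. It assumes PyTorch, a pretrained image generator image_G and kernel generator kernel_G (hypothetical names), latent dimensions dim_i and dim_k, and a blurred tensor of shape (1, C, H, W); it uses only a data-fidelity loss, to which latent-norm or classical image priors would be added in the modified scheme described above.

    import torch
    import torch.nn.functional as F

    def deblur(blurred, image_G, kernel_G, dim_i=100, dim_k=50, steps=1000, lr=0.01):
        # Search the ranges of the two generators for a sharp image and a blur
        # kernel whose convolution best explains the blurred observation.
        z_i = torch.randn(1, dim_i, requires_grad=True)   # latent code for the image generator
        z_k = torch.randn(1, dim_k, requires_grad=True)   # latent code for the kernel generator
        opt = torch.optim.Adam([z_i, z_k], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            x = image_G(z_i)                   # candidate sharp image, (1, C, H, W)
            k = kernel_G(z_k)                  # candidate blur kernel, (1, 1, kh, kw)
            k = k / k.sum()                    # kernel assumed non-negative, summing to one
            w = k.repeat(x.shape[1], 1, 1, 1)  # apply the same kernel to every channel
            y_hat = F.conv2d(x, w, padding='same', groups=x.shape[1])
            # Data-fidelity term; regularizers on z_i, z_k or classical image
            # priors (e.g., total variation) could be added to this loss.
            loss = F.mse_loss(y_hat, blurred)
            loss.backward()
            opt.step()
        with torch.no_grad():
            x_hat = image_G(z_i)
            k_hat = kernel_G(z_k)
            k_hat = k_hat / k_hat.sum()
        return x_hat, k_hat

In the untrained-network variant mentioned above, image_G would instead be a randomly initialized convolutional network whose weights (rather than, or along with, z_i) are optimized, following the same loss.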