Image deblurring is a challenging problem in computational photography and computer vision. In the deep learning era, deblurring methods built on neural networks have achieved significant results. However, existing methods mainly focus on solving specific image deblurring problems and overlook the origin of motion blur. In this paper, we revisit how blur occurs and divide it into three categories: blur caused by relative motion between the camera and the scene, blur caused by the movement of an object itself, and blur at the edges of a blurry image, which may exhibit discontinuities because the pixel trajectories are sampled outside the image. To address these different kinds of blur within a single image, we propose a two-stage neural network for image deblurring named RAID-Net. To remove the global blur caused by camera movement, we first use a U-shaped network to obtain a coarse deblurred image. We then leverage an adaptive reasoning module to jointly model the relationships between the different blurry regions of an image and remove the remaining two categories of motion blur. Experiments on two public benchmark datasets demonstrate that our method achieves comparable or better results than state-of-the-art methods.
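The two-stage design described above can be sketched conceptually as follows. This is a minimal, hypothetical illustration only: the abstract does not give the actual layers of RAID-Net, so the "U-shape" stage is stood in for by a simple downsample/upsample path with a skip connection, and the "adaptive reasoning" stage by a hand-written region-wise correction. The function names, the region mask, and the correction factor are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def coarse_deblur(img):
    # Stage 1 (conceptual U-shape): encode by downsampling, decode by
    # upsampling, and combine with the input via a skip connection.
    # RAID-Net uses a learned U-shaped network; this is only a stand-in.
    h, w = img.shape
    down = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))  # "encoder"
    up = np.kron(down, np.ones((2, 2)))                          # "decoder"
    return 0.5 * (up + img)                                      # skip connection

def adaptive_refine(coarse, region_mask):
    # Stage 2 (conceptual): handle differently-blurred regions jointly.
    # The paper's adaptive reasoning module learns these relationships;
    # here we merely apply a hypothetical region-specific correction.
    refined = coarse.copy()
    refined[region_mask] *= 1.1  # hypothetical per-region adjustment
    return np.clip(refined, 0.0, 1.0)

# Toy usage: an 8x8 "blurry image" and a hypothetical blur-type mask.
blurry = np.random.default_rng(0).random((8, 8))
mask = blurry > 0.5
out = adaptive_refine(coarse_deblur(blurry), mask)
print(out.shape)
```

The point of the two stages is separation of concerns: global camera-motion blur is corrected once over the whole image, while object-motion and edge blur, which vary region by region, are handled afterwards with region-aware reasoning.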