Defocus blur detection, which aims to distinguish out-of-focus blur from sharpness, has attracted considerable attention in computer vision. Existing blur detectors suffer from scale ambiguity, which results in ambiguous blur boundaries and low detection accuracy. In this paper, we propose a defocus blur detector that addresses these problems by integrating multiscale deep features with a Conv-LSTM. There are two strategies for extracting multiscale features: the first extracts features from images at different sizes, and the second extracts features from multiple convolutional layers of a single image. Our method employs both strategies, i.e., it extracts multiscale convolutional features from the same image resized to different scales. The features extracted from the differently sized images at the corresponding convolutional layers are fused to generate more robust representations. We use Conv-LSTMs to integrate the fused features gradually from top to bottom layers and to generate multiscale blur estimations. Experiments on the CUHK and DUT datasets demonstrate that our method is superior to state-of-the-art blur detectors.
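The pipeline described above, extracting features from the same image at several sizes, fusing them at corresponding layers, and integrating the fused sequence with a Conv-LSTM, can be illustrated with a schematic NumPy sketch. This is not the authors' implementation: the feature extractor is a single random convolution standing in for a trained CNN, the scale set, the fusion-by-averaging, and the names `detect_blur` and `ConvLSTMCell` are all illustrative assumptions; only the overall data flow follows the abstract.

```python
import numpy as np

def conv2d(x, w):
    """Naive 'same' 2D convolution. x: (C_in, H, W), w: (C_out, C_in, k, k)."""
    c_out, _, k, _ = w.shape
    _, H, W = x.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    out = np.zeros((c_out, H, W))
    for co in range(c_out):
        for i in range(H):
            for j in range(W):
                out[co, i, j] = np.sum(xp[:, i:i + k, j:j + k] * w[co])
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConvLSTMCell:
    """Minimal Conv-LSTM cell: all four gates come from one convolution
    over the stacked [input, hidden] tensor."""
    def __init__(self, in_ch, hid_ch, k=3, seed=0):
        rng = np.random.default_rng(seed)
        self.w = 0.1 * rng.standard_normal((4 * hid_ch, in_ch + hid_ch, k, k))

    def step(self, x, h, c):
        z = conv2d(np.concatenate([x, h], axis=0), self.w)
        i, f, o, g = np.split(z, 4, axis=0)       # input, forget, output, candidate
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        return h, c

def upsample_nn(feat, hw):
    """Nearest-neighbour upsampling of (C, h, w) features to (C, H, W)."""
    _, h, w = feat.shape
    H, W = hw
    return feat[:, np.arange(H) * h // H][:, :, np.arange(W) * w // W]

def detect_blur(img, scales=(1, 2, 4), feat_ch=4, seed=0):
    """Hypothetical end-to-end sketch: multiscale features -> fusion -> Conv-LSTM."""
    rng = np.random.default_rng(seed)
    H, W = img.shape[1:]
    # stand-in feature extractor (a trained CNN in the actual method)
    w_feat = 0.1 * rng.standard_normal((feat_ch, img.shape[0], 3, 3))
    feats = []
    for s in scales:                                  # same image at several sizes
        small = img[:, ::s, ::s]
        f = np.maximum(conv2d(small, w_feat), 0.0)    # ReLU features per scale
        feats.append(upsample_nn(f, (H, W)))          # align resolutions for fusion
    fused = np.mean(feats, axis=0)                    # fuse corresponding-layer features
    # Conv-LSTM integrates the fused and per-scale features coarse-to-fine
    cell = ConvLSTMCell(feat_ch, feat_ch, seed=seed)
    h = np.zeros((feat_ch, H, W))
    c = np.zeros_like(h)
    for f in [fused] + feats[::-1]:
        h, c = cell.step(f, h, c)
    w_out = 0.1 * rng.standard_normal((1, feat_ch, 3, 3))
    return sigmoid(conv2d(h, w_out))                  # per-pixel blur score in (0, 1)
```

Running `detect_blur` on a (3, H, W) image yields a (1, H, W) map of per-pixel blur scores; in the actual method each Conv-LSTM step would also emit an intermediate blur estimation at its scale.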