With the development of deep convolutional neural networks (CNNs), contour detection has made great progress, and some CNN-based contour detectors now surpass human performance on standard benchmarks. However, CNNs tend to learn similar features for adjacent pixels, and the numbers of background pixels and edge pixels in the training samples are highly imbalanced. As a result, the edge maps predicted by CNN-based detectors are thick and require post-processing to obtain crisp edges. To address this, we introduce a novel parallel attention model and a novel loss function that combines cross-entropy and Dice loss through adaptive coefficients, and we propose a bidirectional multiscale refinement network (BMRN) that stacks multiple refinement modules to achieve richer feature representations. Experimental results show that our method outperforms the state of the art on BSDS500 (ODS F-score of 0.828), the NYUDv2 depth dataset (ODS F-score of 0.778), and the Multicue dataset (ODS F-score of 0.905 (0.002)).
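As a rough illustration of the kind of combined objective described above, the following is a minimal PyTorch sketch of a cross-entropy plus Dice loss for edge maps. The abstract does not specify how the adaptive coefficients are computed, so the weights `alpha` and `beta` below are hypothetical placeholders, not the paper's actual weighting scheme.

```python
import torch
import torch.nn.functional as F


def dice_loss(prob, target, eps=1.0):
    """Soft Dice loss over a batch of edge-probability maps."""
    prob = prob.flatten(1)
    target = target.flatten(1)
    inter = (prob * target).sum(dim=1)
    union = prob.sum(dim=1) + target.sum(dim=1)
    return (1.0 - (2.0 * inter + eps) / (union + eps)).mean()


def combined_edge_loss(logits, target, alpha=1.0, beta=1.0):
    """Weighted sum of binary cross-entropy and Dice loss.

    alpha and beta stand in for the adaptive coefficients mentioned
    in the abstract; their actual update rule is not given there.
    """
    prob = torch.sigmoid(logits)
    ce = F.binary_cross_entropy(prob, target)
    return alpha * ce + beta * dice_loss(prob, target)


# Example usage with a dummy prediction and ground-truth edge map.
logits = torch.randn(2, 1, 64, 64)
target = (torch.rand(2, 1, 64, 64) > 0.9).float()
loss = combined_edge_loss(logits, target)
```

Dice loss is commonly paired with cross-entropy in edge detection because it is less sensitive to the heavy background/edge class imbalance the abstract points out.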
               