Abstract Benefiting from the development of convolutional neural networks, salient object detection has achieved a qualitative leap in performance. In recent years, most deep learning based methods have utilized multi-level features and inferred saliency maps in a coarse-to-fine manner. However, learning and representing powerful features remains a challenge. In this paper, we propose a novel FCN-like approach named attentive feature integration network (AFINet) for pixel-wise salient object detection, which produces saliency maps with explicit boundaries and uniformly highlighted regions. Specifically, it adopts a feature enhancement module (FEM) to extract rich, enhanced features from the backbone network. A feature discrimination module (FDM) is designed to use the saliency map predicted by a deeper layer to help a shallower layer learn useful, discriminative attentive features. Moreover, we introduce saliency information from deeper layers into shallower ones in a saliency prediction module (SPM), which helps the shallow side outputs accurately locate salient regions. In addition, we design a saliency fusion module (SFM) that integrates the different side outputs to exploit multi-level features. Finally, a fully connected CRF scheme can optionally be incorporated to obtain saliency results with higher accuracy. Both qualitative and quantitative evaluations on five public benchmark datasets demonstrate that our approach compares favorably against 17 state-of-the-art approaches.
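The abstract does not give implementation details, but the core idea it describes for the FDM, using a deeper layer's predicted saliency map to guide a shallower layer's features, can be sketched in NumPy. Everything below (the function names, sigmoid gating, and nearest-neighbour upsampling) is an assumed illustration of that general mechanism, not the paper's actual design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def upsample_nn(x, factor):
    # Nearest-neighbour upsampling of an (H, W) map by an integer factor.
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

def fdm_gate(shallow_feats, deep_saliency_logits, factor):
    """Gate shallow features (C, H, W) with a deeper-layer saliency
    prediction (H/factor, W/factor), producing attentive features.

    Hypothetical sketch: mirrors the abstract's description only in
    spirit; the paper's exact FDM architecture is not given here.
    """
    attn = sigmoid(upsample_nn(deep_saliency_logits, factor))  # (H, W), values in (0, 1)
    return shallow_feats * attn[None, :, :]                    # broadcast over channels

# Toy example: 8-channel shallow features at 16x16, deeper prediction at 8x8.
rng = np.random.default_rng(0)
shallow = rng.standard_normal((8, 16, 16))
deep_logits = rng.standard_normal((8, 8))
out = fdm_gate(shallow, deep_logits, factor=2)
print(out.shape)  # (8, 16, 16)
```

In this sketch, background locations (where the deeper prediction is confidently negative) have their shallow-layer activations suppressed toward zero, which is one plausible way shallower layers could be steered toward discriminative salient-region features.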