Co-saliency detection focuses on detecting common and salient objects across a group of images. With the application of deep learning to co-saliency detection, more accurate and more effective models have been proposed in an end-to-end manner. However, two major drawbacks of these models hinder further performance improvement in co-saliency detection: 1) inference in a static manner, and 2) a fixed number of input images. To address these limitations, we present a novel Adaptive Group-wise Consistency Network (AGCNet) with the ability of content-adaptive adjustment for a given image group with an arbitrary number of images. In AGCNet, we first introduce intra-saliency priors generated by any off-the-shelf salient object detection model. Then, an Adaptive Group-wise Consistency (AGC) module is proposed to capture group consistency for each individual image, and is applied to features at three scales to capture group consistency from different perspectives. This module comprises two key components: a content-adaptive group consistency block, which overcomes the above limitations by adaptively capturing global group consistency with the assistance of the intra-saliency priors, and a ranking-based fusion block, which combines this consistency with the individual attributes of each image feature to generate a discriminative group consistency feature for each image. Following the AGC modules, a specially designed Aggregated Decoder aggregates the three-scale group consistency features to adapt to co-salient objects of diverse scales and produce a preliminary detection. Finally, we incorporate two standard decoders to progressively refine the preliminary detection and generate the final co-saliency maps. Extensive experiments on four benchmark datasets demonstrate that our AGCNet achieves competitive performance compared with 19 state-of-the-art models, and the proposed modules show substantial practical merit.
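The abstract describes the pipeline only at a high level, so the sketch below is one minimal PyTorch rendering of that description, not the authors' implementation. Every unspecified component is an assumption here: a toy three-stage convolutional backbone, prior-weighted global pooling as a stand-in for the content-adaptive group consistency block, a concatenate-and-convolve stand-in for the ranking-based fusion block, and plain convolutional decoders. Names such as `AGCModule`, `aggregate`, and `refine` are hypothetical.

```python
# Minimal sketch of the AGCNet pipeline as described in the abstract.
# All internals (pooling-based consistency, the fusion stand-in, decoder
# layouts) are illustrative assumptions, not the paper's actual design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AGCModule(nn.Module):
    """Hypothetical Adaptive Group-wise Consistency module: pools a
    group-level consistency vector over an arbitrary number of images
    (weighted by intra-saliency priors), then fuses it back into each
    image's feature map."""

    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Conv2d(channels, channels, 1)
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, feats, priors):
        # feats: (N, C, H, W) for a group of N images; priors: (N, 1, H, W)
        priors = F.interpolate(priors, size=feats.shape[-2:],
                               mode="bilinear", align_corners=False)
        # Prior-weighted global pooling, averaged over the group; this
        # works for any group size N (the "arbitrary number of images").
        weighted = feats * torch.sigmoid(priors)
        group_vec = weighted.mean(dim=(2, 3)).mean(dim=0, keepdim=True)  # (1, C)
        group_map = self.proj(group_vec[..., None, None]).expand_as(feats)
        # Stand-in for the ranking-based fusion block: combine the group
        # consistency with each image's own feature and re-project.
        return self.fuse(torch.cat([feats, group_map], dim=1))


class AGCNet(nn.Module):
    """Sketch of the overall pipeline: three-scale AGC modules, an
    aggregated decoder, and two refinement decoders."""

    def __init__(self, channels=(64, 128, 256)):
        super().__init__()
        # Toy three-scale backbone; a real model would use a pretrained CNN.
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),
                          nn.ReLU(inplace=True))
            for c_in, c_out in zip((3,) + channels[:-1], channels)
        ])
        self.agc = nn.ModuleList([AGCModule(c) for c in channels])
        # Aggregated decoder: merge all three scales at the finest resolution.
        self.aggregate = nn.Conv2d(sum(channels), 64, 3, padding=1)
        self.refine = nn.ModuleList([
            nn.Sequential(nn.Conv2d(64, 64, 3, padding=1),
                          nn.ReLU(inplace=True))
            for _ in range(2)
        ])
        self.head = nn.Conv2d(64, 1, 1)

    def forward(self, images, priors):
        # images: (N, 3, H, W); priors: (N, 1, H, W) from any off-the-shelf
        # salient object detection model.
        feats, x = [], images
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        consis = [agc(f, priors) for agc, f in zip(self.agc, feats)]
        size = consis[0].shape[-2:]
        merged = torch.cat(
            [F.interpolate(c, size=size, mode="bilinear", align_corners=False)
             for c in consis], dim=1)
        x = F.relu(self.aggregate(merged))   # preliminary detection feature
        for block in self.refine:            # progressive refinement
            x = block(x)
        maps = torch.sigmoid(self.head(x))
        return F.interpolate(maps, size=images.shape[-2:],
                             mode="bilinear", align_corners=False)


if __name__ == "__main__":
    group = torch.randn(5, 3, 128, 128)   # a group of 5 images
    priors = torch.rand(5, 1, 128, 128)   # intra-saliency priors
    print(AGCNet()(group, priors).shape)  # torch.Size([5, 1, 128, 128])
```

Note that pooling over the group dimension is what lets this sketch accept any number of images, mirroring the abstract's claim of content-adaptive adjustment for groups of arbitrary size.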