With the rapid development of remote sensing sensor technology, the number of remote sensing images (RSIs) has exploded, and how to effectively retrieve and manage this massive data has become an urgent problem. At present, content-based image retrieval (CBIR) has become the mainstream approach due to its excellent performance. However, most existing retrieval methods consider only the global features of images and therefore lack the ability to discriminate between images that share the same semantic information but differ in visual appearance. To alleviate this issue, a supervised contrastive learning method based on the fusion of global and local features, named SCFR, is proposed in this article. First, a fusion module is designed to combine global and local features and enhance the expressive power of image representations. Second, supervised contrastive learning is introduced into the retrieval task to improve the feature distribution, so that positive sample pairs are drawn close together and negative sample pairs are pushed far apart in the feature space. Furthermore, to make the feature distribution of each class more compact, a center contrastive loss is added as an additional constraint, using class centers that are updated iteratively with the network. Experimental results on three RSI datasets show that the proposed method achieves more effective retrieval performance than state-of-the-art methods. The code and models are available at https://github.com/xdplay17/SCFR.
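The abstract describes two complementary objectives: a supervised contrastive loss over sample pairs and a center contrastive loss over iteratively updated class centers. The PyTorch sketch below illustrates one plausible form of these losses; the function names, temperature value, feature dimension, and center-update scheme are assumptions made for illustration, not details taken from the SCFR implementation.

```python
# Minimal sketch of the two losses described in the abstract (PyTorch).
# All names, dimensions, and hyperparameters here are illustrative assumptions,
# not the authors' SCFR implementation.
import torch
import torch.nn.functional as F


def supervised_contrastive_loss(features, labels, temperature=0.07):
    """Pull same-class (positive) pairs together and push different-class
    (negative) pairs apart in the normalized feature space."""
    features = F.normalize(features, dim=1)                   # (N, D)
    sim = features @ features.t() / temperature               # pairwise similarities
    # Exclude self-similarity on the diagonal.
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(self_mask, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Positive pairs share a label but are not the same sample.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_count
    return loss.mean()


def center_contrastive_loss(features, labels, centers, temperature=0.07):
    """Pull each feature toward its own class center and away from the other
    centers; `centers` would be updated iteratively alongside the network."""
    features = F.normalize(features, dim=1)
    centers = F.normalize(centers, dim=1)                     # (C, D)
    logits = features @ centers.t() / temperature             # (N, C)
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    feats = torch.randn(8, 128)          # fused global+local features (assumed dim)
    labels = torch.randint(0, 4, (8,))   # 4 hypothetical classes
    centers = torch.randn(4, 128)        # one center per class
    total = supervised_contrastive_loss(feats, labels) + \
            center_contrastive_loss(feats, labels, centers)
    print(total.item())
```

In practice the two terms would be weighted and combined with the retrieval objective during training; the exact weighting used by SCFR is not stated in the abstract.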