Remote sensing image analysis has drawn increasing attention in the field of computer vision. At present, the methods commonly used in remote sensing image analysis operate mainly at a low semantic level. The scene graph is an abstraction of objects and their relationships, and its generation is a high-level image understanding task. To fully comprehend the meaning of remote sensing images, in this article, we propose a novel segmentation-based model to generate remote sensing image scene graphs (SRSG). In the SRSG model, a more complete and accurate scene graph is generated with segmentation results as inputs, while the shapes of objects are reasonably encoded. The morphological features of object pairs are embedded by different branches of the SRSG model and then mapped to a semantic space to predict their relationships. Furthermore, a new dataset for scene graph generation of remote sensing images, namely segmentation results to scene graphs (S2SG), is constructed based on pixel-level segmentation results. Experimental results demonstrate that the performance of the SRSG model is far superior to that of previous methods in the task of generating remote sensing image scene graphs. The proposed SRSG model opens up new possibilities for high-level remote sensing image analysis. Moreover, the S2SG dataset allows for the evaluation of different approaches and is provided for the benefit of the research community.
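To make the described pipeline concrete, the sketch below shows one plausible way a two-branch, segmentation-driven relation predictor could be wired up in PyTorch: per-object binary masks are encoded into shape features, the subject and object features are fused, and the fused vector is mapped to relation logits. This is a minimal illustration under assumed design choices; the module names, layer sizes, and relation count are hypothetical and do not reproduce the authors' SRSG architecture.

```python
# Illustrative sketch only (hypothetical names and hyperparameters), showing a
# two-branch predictor that maps a pair of segmentation masks to relation logits.
import torch
import torch.nn as nn


class ShapeEncoder(nn.Module):
    """Encode a binary object mask (1 x H x W) into a shape feature vector."""

    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling over spatial dims
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, mask):
        x = self.conv(mask).flatten(1)  # (B, 32)
        return self.fc(x)               # (B, feat_dim)


class PairRelationPredictor(nn.Module):
    """Fuse subject/object shape features and map them to relation logits."""

    def __init__(self, feat_dim=128, num_relations=16):
        super().__init__()
        self.subject_branch = ShapeEncoder(feat_dim)
        self.object_branch = ShapeEncoder(feat_dim)
        self.classifier = nn.Sequential(
            nn.Linear(2 * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, num_relations),
        )

    def forward(self, subj_mask, obj_mask):
        fused = torch.cat(
            [self.subject_branch(subj_mask), self.object_branch(obj_mask)], dim=1
        )
        return self.classifier(fused)  # logits over relation categories


if __name__ == "__main__":
    model = PairRelationPredictor()
    subj = torch.zeros(2, 1, 64, 64)  # batch of 2 subject masks
    obj = torch.zeros(2, 1, 64, 64)   # batch of 2 object masks
    print(model(subj, obj).shape)     # torch.Size([2, 16])
```

In a scene graph setting, a predictor of this kind would typically be applied to every candidate subject-object pair extracted from the segmentation output, with the predicted relations assembled into the final graph.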