Deep learning-based synthetic aperture radar (SAR) image change detection has recently achieved remarkable success due to its great potential for extracting abstract features. However, the existing methods still have room for improvement in dealing with the speckle of SAR images. In this letter, a deep spatial–temporal gray-level co-occurrence aware convolutional neural network (STGCNet) is proposed, which effectively mines the spatial–temporal information of the bitemporal images and obtains speckle-robust results by introducing the 3-D gray-level co-occurrence matrix (3-D-GLCM) as an auxiliary feature. Specifically, representative features are extracted from the original image pairs and their corresponding 3-D-GLCM through a two-stream network, followed by an adaptive fusion module that balances the contribution of each branch. The final binary change detection results are then obtained by a fully connected layer. The training process relies on reliable labels generated by unsupervised models rather than manually annotated data; therefore, the proposed STGCNet is practical in real-world applications. Experiments on synthesized and real SAR data sets demonstrate the robustness and competitiveness of the proposed method compared with state-of-the-art algorithms.
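To make the two-stream design concrete, below is a minimal PyTorch sketch of the idea as described in the abstract, assuming a patch-based formulation. The channel counts, layer sizes, and the gating form of the adaptive fusion module are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch of a two-stream network with adaptive fusion, assuming a patch-based
# setup: one stream ingests the stacked bitemporal SAR patches, the other the
# corresponding 3-D-GLCM feature planes. All sizes are illustrative.
import torch
import torch.nn as nn

def conv_stream(in_ch: int) -> nn.Sequential:
    """A small CNN branch producing a fixed-length feature vector."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),  # global pooling -> (B, 32, 1, 1)
        nn.Flatten(),             # -> (B, 32)
    )

class STGCNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.image_stream = conv_stream(in_ch=2)  # two SAR acquisitions stacked
        self.glcm_stream = conv_stream(in_ch=8)   # assumed 3-D-GLCM planes
        # Adaptive fusion: a learned gate weighting each branch's contribution
        # (one plausible realization of the abstract's fusion module).
        self.gate = nn.Sequential(nn.Linear(64, 2), nn.Softmax(dim=1))
        self.classifier = nn.Linear(32, 2)  # unchanged / changed

    def forward(self, patches: torch.Tensor, glcm: torch.Tensor) -> torch.Tensor:
        f_img = self.image_stream(patches)   # (B, 32)
        f_glcm = self.glcm_stream(glcm)      # (B, 32)
        w = self.gate(torch.cat([f_img, f_glcm], dim=1))  # (B, 2) branch weights
        fused = w[:, :1] * f_img + w[:, 1:] * f_glcm      # weighted combination
        return self.classifier(fused)        # logits over {unchanged, changed}

# Usage: classify a batch of 7x7 patches (patch size is also an assumption).
model = STGCNetSketch()
logits = model(torch.randn(4, 2, 7, 7), torch.randn(4, 8, 7, 7))
print(logits.shape)  # torch.Size([4, 2])
```

Per the abstract, the training labels for such a classifier would come from an unsupervised pre-classification step rather than manual annotation.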
               