At present, convolutional neural networks have achieved good performance in remote sensing image change detection. However, because convolution is a local operation, these methods struggle to capture global context relationships among features at different levels. To alleviate this issue, we propose a context and difference enhancement network (CDENet) for change detection, which models global context relationships and enhances change differences. Specifically, the backbone is a dual TransUNet, a U-Net equipped with transformer blocks in the encoder, which is used to extract bitemporal features. The features are then encoded as an input sequence, which facilitates modeling the global context. Moreover, we design a content difference enhancement module to process the dual features at each encoder layer; it increases the spatial attention on difference regions to enhance the change difference features. In the decoder, we adopt a simple cross-layer feature fusion that combines upsampled features with high-resolution features to generate more accurate results. Finally, we adopt a novel loss that supervises the accuracy of results at both the region and pixel levels. Experiments on two public change detection datasets demonstrate that CDENet is strongly competitive and performs better than state-of-the-art methods.
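The content difference enhancement idea described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: it assumes the spatial attention map is derived from the channel-averaged absolute difference of the bitemporal features and applied as a residual re-weighting (the function name `difference_enhance` and the exact formula are illustrative assumptions).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def difference_enhance(feat_a, feat_b):
    """Hypothetical sketch of a content-difference-enhancement step:
    derive a spatial attention map from the absolute bitemporal
    difference and use it to re-weight both feature maps.
    feat_a, feat_b: arrays of shape (C, H, W)."""
    # channel-mean absolute difference -> (H, W) difference map
    diff = np.abs(feat_a - feat_b).mean(axis=0)
    # squash to (0, 1) so it acts as spatial attention
    attn = sigmoid(diff)
    # residual re-weighting: keep original content, boost changed regions
    return feat_a * (1.0 + attn), feat_b * (1.0 + attn)

rng = np.random.default_rng(0)
a = rng.normal(size=(8, 4, 4))  # bitemporal feature at time 1
b = rng.normal(size=(8, 4, 4))  # bitemporal feature at time 2
ea, eb = difference_enhance(a, b)
print(ea.shape, eb.shape)
```

Because the attention map lies in (0, 1), the residual form only amplifies responses (by a factor between 1 and 2) where the two dates disagree, which matches the stated goal of enhancing change difference features without suppressing unchanged content.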
               
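The abstract does not spell out the region- and pixel-level loss; a common way to combine the two, shown here purely as a hedged illustration, is a Dice term (region overlap) plus a binary cross-entropy term (per-pixel correctness). The function `hybrid_loss` and its weighting are assumptions, not the paper's exact formulation.

```python
import numpy as np

def hybrid_loss(pred, target, eps=1e-6):
    """Illustrative region + pixel supervision (assumed, not CDENet's
    exact loss): Dice measures region overlap, BCE penalizes each
    pixel. pred holds change probabilities in (0, 1); target is the
    binary change mask."""
    pred = np.clip(pred, eps, 1.0 - eps)
    # pixel-level term: binary cross-entropy averaged over pixels
    bce = -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    # region-level term: soft Dice loss over the whole map
    inter = (pred * target).sum()
    dice = 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    return bce + dice

target = np.array([[1.0, 0.0], [1.0, 0.0]])
good = np.array([[0.99, 0.01], [0.99, 0.01]])  # near-perfect prediction
bad = np.array([[0.10, 0.90], [0.10, 0.90]])   # mostly wrong prediction
print(hybrid_loss(good, target), hybrid_loss(bad, target))
```

In practice the two terms are complementary: BCE alone can be dominated by the unchanged background, while the Dice term keeps small changed regions from being ignored.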