Recently, deep convolutional neural networks (CNNs) have achieved strong performance in cloud detection thanks to their powerful representation learning capability. However, most existing CNN-based cloud detection methods still face serious challenges because of the variable geometry of clouds and the complexity of underlying surfaces. This is largely because they extract contextual information with fixed sampling grids, which lack an internal mechanism to handle the geometric transformations of clouds. To tackle this problem, we propose a deformable convolutional cloud detection network with an encoder-decoder architecture, named DCNet, which enhances the adaptability of the model to cloud variations. Specifically, we introduce deformable convolution blocks in the encoder to adaptively capture salient spatial context based on the morphological characteristics of clouds and generate high-level semantic representations. We then incorporate skip connections into the decoder, which use low-level spatial context as guidance to recover precise pixel localization from the high-level semantic features and produce accurate cloud detection results. Extensive experiments on GF-1 wide field-of-view (WFV) satellite imagery demonstrate that DCNet outperforms several state-of-the-art methods. A public reference implementation of our proposed model in PyTorch is available at https://github.com/NiAn-creator/deformableCloudDetection.git.
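Since the abstract describes two key architectural ideas (deformable convolutions in the encoder and skip connections in the decoder), the following is a minimal sketch of how they might be combined, assuming the torchvision.ops.DeformConv2d API rather than the authors' released code (see the repository above for the reference implementation). The block layout, channel widths, and class names (DeformableBlock, TinyDCNetSketch) are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the authors' code) of a deformable encoder block and a
# decoder stage with a skip connection. All sizes and names are illustrative.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DeformableBlock(nn.Module):
    """Conv block whose sampling grid is learned per pixel (hypothetical layout)."""

    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # A plain conv predicts 2 offsets (dx, dy) per kernel position.
        self.offset = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # The deformable conv samples the input at grid positions shifted by
        # the predicted offsets, adapting the receptive field to cloud shapes.
        return self.act(self.deform(x, self.offset(x)))


class TinyDCNetSketch(nn.Module):
    """Two-level encoder-decoder with one skip connection (illustrative only)."""

    def __init__(self, in_ch=4, num_classes=2):  # GF-1 WFV imagery has 4 spectral bands
        super().__init__()
        self.enc1 = DeformableBlock(in_ch, 32)
        self.down = nn.MaxPool2d(2)
        self.enc2 = DeformableBlock(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = nn.Conv2d(64, 32, kernel_size=3, padding=1)  # 64 = 32 (upsampled) + 32 (skip)
        self.head = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x):
        s1 = self.enc1(x)                  # low-level spatial context
        s2 = self.enc2(self.down(s1))      # high-level semantic features
        u = self.up(s2)                    # recover spatial resolution
        u = torch.relu(self.dec1(torch.cat([u, s1], dim=1)))  # skip connection as guidance
        return self.head(u)                # per-pixel cloud logits


if __name__ == "__main__":
    model = TinyDCNetSketch()
    logits = model(torch.randn(1, 4, 64, 64))
    print(logits.shape)  # torch.Size([1, 2, 64, 64])
```

The design choice this sketch illustrates: a plain convolution first predicts a (dx, dy) offset for every kernel position, and the deformable layer then samples the input at those shifted locations, so the receptive field can follow irregular cloud boundaries instead of a fixed grid; the decoder's skip connection reinjects the encoder's low-level spatial detail to sharpen pixel localization.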