This paper presents a hybrid transformer–CNN deep neural network for semantic segmentation of very-high-resolution remote sensing imagery. The model follows an encoder-decoder structure. The encoder uses the Swin Transformer, a recent universal vision backbone, to extract features and better model long-range spatial dependencies. The decoder draws on effective blocks and strategies from CNN-based remote sensing segmentation models. In the middle of the framework, an atrous spatial pyramid pooling block based on depth-wise separable convolution (SASPP) is applied to capture multi-scale context. A U-shaped decoder gradually restores the spatial size of the feature maps. Three skip connections link encoder and decoder feature maps of the same size to preserve local details and enhance the exchange of multi-scale features. A squeeze-and-excitation (SE) channel attention block is added before segmentation for feature augmentation, and an auxiliary boundary detection branch provides edge constraints for the semantic segmentation task. Extensive ablation experiments on the ISPRS Vaihingen and Potsdam benchmarks verify the effectiveness of the network's components, and the method is compared with current state-of-the-art approaches on both benchmarks. The proposed hybrid network achieves the second-highest overall accuracy (OA) on both Potsdam and Vaihingen. Code and models are available at https://github.com/zq7734509/mmsegmentation-multi-layer.
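Of the components listed, the SE channel attention block is standard enough to sketch from the abstract alone: squeeze features to per-channel statistics via global average pooling, pass them through a bottleneck excitation, and rescale each channel map. The sketch below is a minimal NumPy illustration, not the paper's implementation; the function name, weight shapes, and reduction ratio are assumptions.

```python
import numpy as np

def se_channel_attention(x, w1, w2):
    """Minimal SE block sketch (illustrative, not the paper's code).

    x  : feature map of shape (C, H, W)
    w1 : excitation bottleneck weights, shape (C // r, C) for reduction r
    w2 : expansion weights, shape (C, C // r)
    """
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    z = x.mean(axis=(1, 2))
    # Excitation: FC -> ReLU -> FC -> sigmoid, producing per-channel gates in (0, 1)
    s = np.maximum(w1 @ z, 0.0)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))
    # Scale: reweight each channel's feature map by its gate
    return x * s[:, None, None]
```

With zero-initialized weights every gate is sigmoid(0) = 0.5, so the block halves each channel; trained weights would instead emphasize informative channels before the segmentation head.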