DenseU-Net-Based Semantic Segmentation of Small Objects in Urban Remote Sensing Images

Class imbalance is a serious problem that plagues the semantic segmentation task in urban remote sensing images. Since large object classes dominate the segmentation task, small object classes are usually suppressed, and solutions based on optimizing the overall accuracy are often unsatisfactory. In light of the class imbalance in semantic segmentation of urban remote sensing images, we developed the Down-sampling Block (DownBlock) for obtaining context information and the Up-sampling Block (UpBlock) for restoring the original resolution. We propose an end-to-end deep convolutional neural network (DenseU-Net) architecture for pixel-wise urban remote sensing image segmentation. The main idea of DenseU-Net is to connect convolutional neural network features through cascade operations and to use its symmetrical structure to fuse the detail features in shallow layers with the abstract semantic features in deep layers. A focal loss function weighted by median frequency balancing ($MFB\_Focal_{loss}$) is proposed; with our approach, both the accuracy of the small object classes and the overall accuracy are improved effectively. Our experiments were based on the 2016 ISPRS Vaihingen 2D semantic labeling dataset and demonstrated the following outcomes. In the case where boundary pixels were considered (GT), $MFB\_Focal_{loss}$ achieved good overall segmentation performance using the same U-Net model, and the F1-score of the small object class "car" was improved by 9.28% compared with the cross-entropy loss function. Using the same $MFB\_Focal_{loss}$ loss function, the overall accuracy of DenseU-Net was better than that of U-Net, and the F1-score of the "car" class was 6.71% higher. Finally, without any post-processing, DenseU-Net+$MFB\_Focal_{loss}$ achieved an overall accuracy of 85.63% and an F1-score of 83.23% for the "car" class, which is superior to HSN+OI+WBP both numerically and visually.
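The abstract combines two standard ideas: median frequency balancing (each class is weighted by the ratio of the median class frequency to its own frequency, so rare classes such as "car" are up-weighted) and the focal loss (which down-weights well-classified pixels by a factor $(1-p_t)^\gamma$). The paper's exact formulation is not reproduced on this page, so the following is only a minimal NumPy sketch of that combination; the function names `mfb_weights` and `mfb_focal_loss` and the default $\gamma = 2$ are assumptions, not the authors' code.

```python
import numpy as np

def mfb_weights(label_counts):
    """Median frequency balancing: weight_c = median(freq) / freq_c,
    so classes rarer than the median get weights greater than 1."""
    freqs = label_counts / label_counts.sum()
    return np.median(freqs) / freqs

def mfb_focal_loss(probs, labels, weights, gamma=2.0):
    """MFB-weighted focal loss over flattened pixels.

    probs:   (N, C) softmax probabilities per pixel
    labels:  (N,)   integer ground-truth class per pixel
    weights: (C,)   per-class MFB weights
    """
    # Probability assigned to each pixel's true class.
    pt = probs[np.arange(len(labels)), labels]
    # Focal term suppresses easy pixels; MFB weight boosts rare classes.
    per_pixel = -weights[labels] * (1.0 - pt) ** gamma * np.log(pt)
    return per_pixel.mean()
```

With $\gamma = 0$ and unit weights this reduces to the ordinary mean cross-entropy, which is the baseline the paper compares against; raising $\gamma$ shrinks the contribution of confidently classified background pixels.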

Keywords: remote sensing; urban remote sensing; loss; DenseU-Net; segmentation

Journal Title: IEEE Access
Year Published: 2019
