Not all building labels used for training improve the performance of a deep learning model. Some labels are false or too ambiguous to represent their ground truths, which degrades the model's performance. For example, the building labels in OpenStreetMap (OSM) and Microsoft Building Footprints (MBF) are publicly available training sources with great potential for training deep models, but using them directly can limit performance because the labels are incomplete and inaccurate; such labels are called noisy labels. This article presents self-filtered learning (SFL), which helps a deep model learn well from noisy labels for building extraction in remote sensing images. SFL iteratively filters out noisy labels during training based on per-sample loss. Over multiple rounds, the deep model progressively learns from refined samples from which the noisy labels have been removed. Extensive experiments with a simulated noisy map as well as the real-world noisy maps OSM and MBF showed that SFL improves the deep model's performance across diverse error types and noise levels.
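The abstract describes the core mechanism only at a high level: after each round, samples with high training loss are treated as likely noisy and dropped before the next round. The following is a minimal sketch of that loss-based filtering loop, assuming a PyTorch binary-segmentation model; the function names, the `keep_ratio`, and the round count are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of loss-based self-filtering (not the authors' code).
# Assumes a PyTorch segmentation model and a dataset of (image, label) tiles;
# keep_ratio and rounds are hypothetical hyperparameters.
import torch
from torch.utils.data import DataLoader, Subset


def per_sample_loss(model, dataset, device="cpu"):
    """Compute the mean BCE loss of each sample; higher loss suggests a noisier label."""
    model.eval()
    criterion = torch.nn.BCEWithLogitsLoss(reduction="mean")
    losses = []
    with torch.no_grad():
        for image, label in DataLoader(dataset, batch_size=1):
            logits = model(image.to(device))
            losses.append(criterion(logits, label.to(device)).item())
    return losses


def self_filtered_learning(model, dataset, train_one_round, rounds=3, keep_ratio=0.8, device="cpu"):
    """Iteratively drop the highest-loss samples and retrain on the refined subset."""
    indices = list(range(len(dataset)))
    for _ in range(rounds):
        subset = Subset(dataset, indices)
        train_one_round(model, subset)                      # train on the current refined set
        losses = per_sample_loss(model, subset, device)
        ranked = sorted(range(len(losses)), key=lambda i: losses[i])
        kept = ranked[: int(keep_ratio * len(ranked))]      # keep low-loss (likely clean) samples
        indices = [indices[i] for i in kept]                # map back to original dataset indices
    return model, indices
```

In this sketch, `train_one_round` stands in for an ordinary training loop; the only SFL-specific step is re-ranking samples by loss after each round and shrinking the training set before the next one.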
               