Remote sensing contains a large amount of out-of-distribution (OOD) data, which prevents high-accuracy segmentation models built under the independent and identically distributed (i.i.d.) assumption from delivering stable and reliable performance in real-world remote sensing applications. Domain adaptation (DA) has been proposed to extend classifiers to a label-scarce target domain by leveraging a label-sufficient source domain with a different data distribution. However, because the domain shift, i.e., the distribution difference between the two domains, is more severe in remote sensing images, current DA methods for image segmentation in computer vision (CV) typically perform unsatisfactorily in remote sensing and may even suffer from negative domain alignment. To this end, this letter proposes the self-training adversarial DA (STADA) method for remote sensing image segmentation, which not only performs adversarial learning to extract domain-invariant features but also implements self-training for classifier adaptation, using pseudo-labels in the target domain that are denoised by a conditional adversarial loss. Extensive experiments on the International Society for Photogrammetry and Remote Sensing (ISPRS) and Wuhan University (WHU) datasets investigate the effectiveness of STADA and the specific contribution of each DA component. The experimental results demonstrate that STADA outperforms other state-of-the-art DA methods on the remote sensing image segmentation task.
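The sketch below illustrates the general recipe the abstract describes: a segmentation network trained on labeled source images, aligned to the target domain with an adversarial domain discriminator, and additionally supervised by pseudo-labels on target images. All module names (`SegNet`, `DomainDiscriminator`), hyper-parameters, and the simple confidence-threshold filtering of pseudo-labels are illustrative assumptions, not the authors' STADA implementation; in particular, the paper denoises pseudo-labels with a conditional adversarial loss, for which the threshold here is only a stand-in.

```python
# Hedged sketch: adversarial domain adaptation + self-training for segmentation.
# Placeholder architectures and losses; not the STADA method itself.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SegNet(nn.Module):
    """Toy fully convolutional segmentation network (placeholder backbone)."""
    def __init__(self, num_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        return self.classifier(self.features(x))  # per-pixel class logits


class DomainDiscriminator(nn.Module):
    """Predicts, per pixel, whether a softmax map came from source or target."""
    def __init__(self, num_classes=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, p):
        return self.net(p)  # per-pixel domain logits


def train_step(seg, disc, opt_seg, opt_disc, x_src, y_src, x_tgt,
               lambda_adv=0.001, lambda_self=0.1, conf_thresh=0.9):
    """One update: supervised source loss + adversarial alignment + self-training."""
    bce = nn.BCEWithLogitsLoss()

    # 1) Update the segmentation network.
    opt_seg.zero_grad()
    logits_src = seg(x_src)
    logits_tgt = seg(x_tgt)
    loss_sup = F.cross_entropy(logits_src, y_src)

    # Adversarial term: push target outputs to be classified as "source" (label 0).
    p_tgt = F.softmax(logits_tgt, dim=1)
    d_tgt = disc(p_tgt)
    loss_adv = bce(d_tgt, torch.zeros_like(d_tgt))

    # Self-training on confident target pixels (simple proxy for pseudo-label denoising).
    with torch.no_grad():
        conf, pseudo = p_tgt.max(dim=1)
    mask = conf > conf_thresh
    loss_self = (F.cross_entropy(logits_tgt, pseudo, reduction="none") * mask).sum() \
                / mask.sum().clamp(min=1)

    (loss_sup + lambda_adv * loss_adv + lambda_self * loss_self).backward()
    opt_seg.step()

    # 2) Update the domain discriminator (source -> 0, target -> 1).
    opt_disc.zero_grad()
    d_src = disc(F.softmax(seg(x_src), dim=1).detach())
    d_tgt = disc(p_tgt.detach())
    loss_d = bce(d_src, torch.zeros_like(d_src)) + bce(d_tgt, torch.ones_like(d_tgt))
    loss_d.backward()
    opt_disc.step()

    return loss_sup.item(), loss_adv.item(), loss_self.item(), loss_d.item()
```

In this sketch the discriminator operates on softmax output maps rather than intermediate features, which is one common choice for output-space adversarial alignment; the abstract does not specify which variant STADA adopts.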