Cross-sensor remote-sensing data significantly degrade the performance of traditional land cover classification (LCC) models. This occurs because data collected by different satellites (with diverse image resolutions and geographical locations) follow different probability distributions. To resolve this, a cross-sensor domain adaptation (DA) strategy is investigated under two source $\rightarrow$ target scenarios using hyperspectral and aerial image datasets. At the outset, a feature extraction (FE) step, together with a "stacking of samples" strategy (applied whenever required), is proposed to balance the cross-sensor data in terms of feature dimensions and the number of available samples. Thereafter, a standard deviation (SD)-based active learning (AL) technique is investigated that exploits the labeled source images to identify the "most informative" target samples. Finally, the labeled source samples and the "most informative" target samples are merged to train a classifier, which is then used to predict land cover classes under a multi-sensor framework. Experimental results are promising and indicate that the proposed scheme can handle the DA problem in a cross-sensor environment.
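The abstract does not detail how the SD-based AL criterion is computed, so the following is only a minimal sketch of one plausible reading: a classifier trained on the labeled source data scores the unlabeled target samples, and samples whose predicted class probabilities have the lowest standard deviation (i.e., the classifier is least decisive) are treated as the "most informative" ones. The function name, the choice of random forest, and the query budget are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an SD-based active-learning selection step.
# Assumptions: uncertainty = standard deviation of predicted class
# probabilities; a random forest stands in for the (unspecified) classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def select_informative_targets(X_src, y_src, X_tgt, n_query=50):
    """Train on labeled source data, then rank unlabeled target samples by
    the SD of their predicted class probabilities. A small SD means the
    probabilities are spread nearly evenly across classes, so the sample is
    treated as 'most informative' and selected for labeling."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_src, y_src)
    proba = clf.predict_proba(X_tgt)        # shape: (n_target, n_classes)
    sd = proba.std(axis=1)                  # per-sample SD over class scores
    return np.argsort(sd)[:n_query]         # lowest SD = most uncertain


# Once the queried target samples are labeled, they would be merged with the
# labeled source set and a single classifier retrained for cross-sensor
# land cover prediction, as described in the abstract.
```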
               