During the past decades, the invention and deployment of multiple sensors have enabled the acquisition of multisensor remote-sensing images (RSIs). To use these images effectively for RS scene understanding, a scene classification model trained on samples collected from one sensor should generalize well to other sensors. However, transferring directly between different sensors is extremely challenging. The key reason is that, if we regard the images obtained by different sensors as data distributed in different domains, there are large interdomain gaps caused by multiple factors, such as image scene content and sensor imaging parameters. To address this, we propose a general transitive transfer learning (TTL) framework for cross-optical RSI scene understanding, which can be easily coupled with most existing transfer learning methods. The core idea is to gradually minimize the interdomain gap between different sensors through several intermediate domains, constructed under a single-factor criterion: only one factor changes between adjacent domains. One challenging cross-optical-sensor scene understanding task can then be divided into several easier subtasks connected by these intermediate domains (source domain, intermediate domains, target domain).
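The chained adaptation idea above can be sketched in a few lines. This is a toy illustration only, not the authors' implementation: the domain statistics, the gap metric, and the adaptation step are all assumptions, chosen to show how single-factor intermediate domains split one large source-to-target gap into smaller per-step gaps.

```python
# Toy sketch of transitive transfer through intermediate domains.
# All names and values below are hypothetical illustrations.

def domain_gap(a, b):
    """Toy interdomain gap: L1 distance between domain statistics."""
    return sum(abs(x - y) for x, y in zip(a, b))

def transitive_transfer(chain, adapt):
    """Adapt step by step along a chain [source, intermediate..., target].

    Each hop changes a single factor, so every sub-transfer bridges a
    smaller gap than the direct source-to-target transfer.
    """
    current = chain[0]          # stand-in for the current model/domain stats
    step_gaps = []
    for nxt in chain[1:]:
        step_gaps.append(domain_gap(current, nxt))
        current = adapt(current, nxt)
    return current, step_gaps

# Hypothetical single-factor chain: the first hop shifts scene content,
# the second shifts sensor imaging parameters.
source = (0.0, 0.0)   # sensor A: (content factor, imaging factor)
mid    = (1.0, 0.0)   # content shifted, imaging unchanged
target = (1.0, 1.0)   # imaging shifted as well

adapt = lambda cur, nxt: nxt  # toy adaptation: fully fit the next domain

_, step_gaps = transitive_transfer([source, mid, target], adapt)
direct_gap = domain_gap(source, target)
assert all(g < direct_gap for g in step_gaps)  # each subtask is easier
```

In this sketch the direct source-to-target gap is 2.0, while each intermediate hop only bridges a gap of 1.0, mirroring how the framework replaces one hard transfer with a sequence of easier ones.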
               