Unlike optical sensors, synthetic aperture radar (SAR) sensors acquire images of the Earth’s surface with all-weather, day-and-night capabilities, which is vital in situations such as disaster assessment. However, SAR sensors do not offer visual information as rich as that of optical sensors. SAR-to-optical image-to-image translation generates optical images from SAR images to benefit from what both imaging modalities have to offer. It also enables multisensor image analysis of the same scene for applications such as heterogeneous change detection. Various architectures of generative adversarial networks (GANs) have achieved remarkable image-to-image translation results in other domains, yet their performance in SAR-to-optical image translation has not been analyzed in the remote-sensing domain. This letter compares and analyzes state-of-the-art GAN-based translation methods with open-source implementations for SAR-to-optical image translation. The results show that GAN-based SAR-to-optical image translation methods achieve satisfactory results; however, their performance depends on the structural complexity of the observed scene and the spatial resolution of the data. We also introduce a new dataset with a higher resolution than existing SAR-to-optical image datasets and release implementations of the GAN-based methods considered in this letter to support reproducible research in remote sensing.
               