Due to the all-weather, day-and-night imaging capability of synthetic aperture radar (SAR), SAR remote sensing analysis has attracted much attention recently. However, compared with optical images, SAR images are more difficult to interpret. If an SAR image could be translated into its corresponding optical image, the generated optical image would help assist the interpretation. To address this issue, we investigate how to translate SAR images into optical ones and propose a parallel generative adversarial model for SAR-to-optical image translation, called parallel generative adversarial network (Parallel-GAN), which consists of a backbone image translation subnetwork and an adjoint optical image reconstruction subnetwork. In the proposed model, the backbone subnetwork translates SAR images into optical ones, while some of its intermediate layers are simultaneously required to output latent features similar to those of the corresponding layers of the adjoint image reconstruction subnetwork. Thanks to these imposed hierarchical latent optical features, the proposed Parallel-GAN achieves effective SAR-to-optical image translation. Extensive experimental results on three public datasets demonstrate that the proposed method outperforms ten state-of-the-art methods for SAR-to-optical image translation.
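
To make the parallel-subnetwork idea concrete, the following is a minimal PyTorch sketch of the training signal described above: a backbone SAR-to-optical translation subnetwork trained alongside an adjoint optical-to-optical reconstruction subnetwork, with an intermediate (latent) feature of the backbone pulled toward the corresponding feature of the adjoint network. All module names, layer sizes, and loss weights here are illustrative assumptions rather than the authors' implementation, the adversarial discriminator loss is omitted for brevity, and only a single latent layer is aligned instead of the paper's hierarchical set.

import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    """Small encoder-decoder; the encoder output serves as the latent feature."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, out_ch, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        latent = self.encoder(x)  # intermediate feature to be aligned
        return self.decoder(latent), latent

# Backbone: SAR (1 channel) -> optical (3 channels).
backbone = EncoderDecoder(in_ch=1, out_ch=3)
# Adjoint: optical -> optical reconstruction; its latents act as targets.
adjoint = EncoderDecoder(in_ch=3, out_ch=3)

sar = torch.randn(4, 1, 64, 64)      # toy paired SAR batch
optical = torch.randn(4, 3, 64, 64)  # toy paired optical batch

fake_optical, latent_sar = backbone(sar)
recon_optical, latent_opt = adjoint(optical)

l1 = nn.L1Loss()
translation_loss = l1(fake_optical, optical)     # pixel-level supervision
reconstruction_loss = l1(recon_optical, optical) # adjoint self-reconstruction
# Latent-feature constraint: detach the adjoint latent so it acts as a
# fixed target that guides the backbone's intermediate representation.
feature_loss = l1(latent_sar, latent_opt.detach())

# The relative weight of the feature term is an assumption for illustration.
total = translation_loss + reconstruction_loss + 10.0 * feature_loss
total.backward()

In practice one would align several intermediate layers (the hierarchical constraint the abstract describes) and add the usual GAN discriminator terms; the design intuition is that the adjoint reconstruction subnetwork, which sees real optical images, provides optical-domain latent features for the backbone to imitate.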