Two-dimensional phase unwrapping (PU) is a classical ill-posed problem in synthetic aperture radar interferometry (InSAR). Traditional algorithmic model-based 2-D PU methods are limited by the Itoh condition, an empirical constraint drawn from PU researchers' experience, and face critical challenges under strong phase noise or drastic phase changes. Recently, advanced learning-based 2-D PU methods have been able to break through the limitation of the Itoh condition owing to their data-driven frameworks, offering promising results in terms of both speed and accuracy. The one-step learning-based PU method, as one representative, retrieves the unwrapped phase directly from the wrapped phase through regression. However, its main disadvantage is that the $L_{2}$ loss usually blurs the output unwrapped phase; that is, it cannot guarantee congruency between the rewrapped interferometric fringes of the PU solution and the input interferogram. To solve this problem, we propose a one-step 2-D PU method based on a conditional generative adversarial network (referred to as PU-GAN), which treats 2-D PU as an image-to-image translation problem. The generator in PU-GAN, built on a U-Net architecture, is trained to generate the unwrapped phase by minimizing an $L_{1}$-norm loss, while the corresponding discriminator, with a PatchGAN structure, learns an adversarial loss by trying to classify whether the output unwrapped-phase image is real or fake. Both theoretical analysis and experimental results show that the proposed method outperforms representative algorithmic model-based and learning-based 2-D PU methods.
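To make the loss structure concrete, the sketch below shows how a pix2pix-style conditional-GAN objective of this kind can be assembled: an $L_{1}$ reconstruction term on the unwrapped phase plus an adversarial term supplied by a patch-wise discriminator conditioned on the wrapped input. This is a minimal illustration assuming PyTorch; the layer sizes, the `lambda_l1` weight, and the placeholder generator are illustrative assumptions, not the exact PU-GAN configuration reported by the authors.

```python
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class PatchDiscriminator(nn.Module):
    """Small PatchGAN-style discriminator: scores overlapping patches of the
    (wrapped, unwrapped) pair as real or fake, conditioned on the wrapped input."""

    def __init__(self, in_channels=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.InstanceNorm2d(128),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),  # patch-wise real/fake logits
        )

    def forward(self, wrapped, unwrapped):
        # Condition on the wrapped interferogram by channel concatenation.
        return self.net(torch.cat([wrapped, unwrapped], dim=1))


def generator_loss(disc, generator, wrapped, true_unwrapped, lambda_l1=100.0):
    """Adversarial term + L1 reconstruction term (rather than L2), the combination
    the abstract credits with reducing the blurring of one-step regression."""
    bce = nn.BCEWithLogitsLoss()
    fake_unwrapped = generator(wrapped)
    pred_fake = disc(wrapped, fake_unwrapped)
    adv = bce(pred_fake, torch.ones_like(pred_fake))   # try to fool the discriminator
    rec = F.l1_loss(fake_unwrapped, true_unwrapped)    # L1 keeps fringes sharp
    return adv + lambda_l1 * rec, fake_unwrapped


def discriminator_loss(disc, wrapped, true_unwrapped, fake_unwrapped):
    """Standard conditional-GAN discriminator loss: real pairs -> 1, fake pairs -> 0."""
    bce = nn.BCEWithLogitsLoss()
    pred_real = disc(wrapped, true_unwrapped)
    pred_fake = disc(wrapped, fake_unwrapped.detach())  # do not backprop into G here
    return 0.5 * (bce(pred_real, torch.ones_like(pred_real))
                  + bce(pred_fake, torch.zeros_like(pred_fake)))


if __name__ == "__main__":
    # Placeholder generator; a real PU-GAN generator would be a U-Net mapping
    # the wrapped phase to the unwrapped phase.
    gen = nn.Conv2d(1, 1, 3, padding=1)
    disc = PatchDiscriminator(in_channels=2)
    wrapped = torch.rand(2, 1, 256, 256) * 2 * math.pi - math.pi  # dummy wrapped phase in [-pi, pi)
    unwrapped = torch.randn(2, 1, 256, 256) * 10                  # dummy unwrapped phase
    g_loss, fake = generator_loss(disc, gen, wrapped, unwrapped)
    d_loss = discriminator_loss(disc, wrapped, unwrapped, fake)
    print(g_loss.item(), d_loss.item(), fake.shape)
```

In this setup, weighting the $L_{1}$ term heavily keeps the output close to the target phase, the adversarial term discourages the blurring typical of pure regression, and the discriminator's patch-wise output penalizes local fringe inconsistencies rather than judging the whole image at once.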