Inverse synthetic aperture radar (ISAR) is an effective method for target detection. For maneuvering targets, however, the Doppler frequency induced by an arbitrary scatterer on the target is time-varying, which defocuses the ISAR image and hinders subsequent recognition. Traditional methods struggle to refocus all positions on the target well. In recent years, generative adversarial networks (GANs) have achieved great success in image translation, but current refocusing models ignore the high-order-term information contained in the relationship between the real and imaginary parts of the data. To this end, an end-to-end refocusing network, named complex-valued pix2pixHD (CVPHD), is proposed to learn the mapping from defocused to focused images, taking complex-valued (CV) ISAR images as input. A CV instance normalization layer mines the deep relationship between the real and imaginary parts by computing their covariance, and also accelerates training. Subsequently, an adaptively weighted loss function is introduced to improve the overall refocusing effect. Finally, the proposed CVPHD is evaluated on both simulated and real datasets, producing well-focused results in both cases. Comparative experiments show that extending the pix2pixHD network to the CV domain reduces the refocusing error, and that CVPHD surpasses other autofocus methods in refocusing quality. The code and dataset are available online (https://github.com/yhx-hit/CVPHD).
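The abstract's CV instance normalization layer normalizes complex-valued features using the covariance between their real and imaginary parts. A minimal NumPy sketch of this idea is given below, using the standard 2×2 covariance-whitening formulation for complex features; this is an illustrative assumption, not necessarily the authors' exact layer (their implementation is in the linked repository).

```python
import numpy as np

def complex_instance_norm(x, eps=1e-5):
    """Whiten each channel of a complex feature map x of shape (C, H, W).

    For every channel, the real/imaginary pair is treated as a 2-D vector
    and multiplied by the inverse square root of its 2x2 covariance matrix,
    so the output has unit variance in both parts and zero cross-covariance.
    """
    out = np.empty_like(x)
    for c in range(x.shape[0]):
        # Center real and imaginary parts per channel (per instance).
        r = x[c].real - x[c].real.mean()
        i = x[c].imag - x[c].imag.mean()
        # 2x2 covariance entries (eps keeps the matrix positive definite).
        Vrr = (r * r).mean() + eps
        Vii = (i * i).mean() + eps
        Vri = (r * i).mean()
        # Closed-form inverse square root of [[Vrr, Vri], [Vri, Vii]].
        s = np.sqrt(Vrr * Vii - Vri ** 2)   # sqrt of determinant
        t = np.sqrt(Vrr + Vii + 2 * s)      # sqrt of trace of the matrix sqrt
        inv = 1.0 / (s * t)
        Wrr = (Vii + s) * inv
        Wii = (Vrr + s) * inv
        Wri = -Vri * inv
        # Apply the whitening matrix to the (real, imag) vector field.
        out[c] = (Wrr * r + Wri * i) + 1j * (Wri * r + Wii * i)
    return out
```

In a trainable layer, the whitened output would typically also be scaled and shifted by learnable parameters; the sketch omits these to show only the covariance computation the abstract refers to.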