The compressive sensing (CS)-based tomographic SAR (TomoSAR) 3-D imaging method suffers from low efficiency in two respects: first, the CS solver requires iterative computation and is therefore computationally expensive; second, the CS solver requires hyperparameter selection, which commonly involves costly trial-and-error tuning. Recently, it has been suggested that the iterative CS solver be replaced by a deep learning network to achieve a tremendous improvement in processing speed. However, existing deep-learning-based TomoSAR imaging algorithms suffer from model inadaptability, i.e., they cannot adapt to the observation model or the signal energy model, and their accuracy is therefore low. This article proposes a new model-adaptive network (MAda-Net) to implement deep-learning-based TomoSAR 3-D imaging with much improved accuracy. First, a new adaptive model-solving (AMS) module is introduced to resolve the inconsistency between the real, spatially varying observation model and the approximately fixed one used by the network. Second, a new adaptive threshold-activation (ATC) module is introduced to resolve the signal-energy-model inconsistency between the real backscattered echo and the simulated echo used for network training. The effectiveness of the proposed method is verified by computer simulations and real unmanned aerial vehicle (UAV) SAR experiments.
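As context for the iterative-solver bottleneck described above, the following is a minimal sketch of a classical ISTA-style CS recovery of a sparse elevation profile from multi-baseline TomoSAR samples y = A·x + n. It is not the paper's MAda-Net; the steering matrix A, the regularization weight `lam`, and the scatterer positions are illustrative assumptions, and `lam` is exactly the kind of hand-tuned hyperparameter the abstract says requires trial-and-error selection.

```python
import numpy as np

def soft_threshold(v, tau):
    # Complex soft-thresholding: shrink the magnitude, keep the phase
    # (proximal operator of the L1 norm for complex-valued signals).
    mag = np.abs(v)
    return np.where(mag > tau, (1.0 - tau / np.maximum(mag, 1e-12)) * v, 0.0)

def ista_tomosar(y, A, lam=0.1, n_iters=500):
    """Recover a sparse elevation profile x from multi-baseline samples y = A @ x + n.
    A is the (assumed known) observation/steering matrix; lam is a hand-tuned
    sparsity weight. The iterative loop below is the speed bottleneck that
    deep-learning approaches aim to remove."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iters):
        x = soft_threshold(x + step * A.conj().T @ (y - A @ x), step * lam)
    return x

# Toy usage: 2 point scatterers along elevation, 16 baselines, 64 elevation bins.
rng = np.random.default_rng(0)
A = np.exp(1j * rng.uniform(0, 2 * np.pi, (16, 64))) / np.sqrt(16)
x_true = np.zeros(64, dtype=complex)
x_true[[10, 40]] = [1.0, 0.7j]
y = A @ x_true + 0.01 * (rng.standard_normal(16) + 1j * rng.standard_normal(16))
x_hat = ista_tomosar(y, A, lam=0.05)
```

Deep-learning-based TomoSAR imagers typically unroll such an iteration into a fixed, small number of network layers with learned step sizes and thresholds, which is where the large speed gain comes from; the accuracy issues the abstract raises arise when the fixed observation matrix and the simulated training echoes baked into such a network differ from the real, spatially varying acquisition.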