It has long been recognized that synthetic aperture radar (SAR) images suffer from speckle noise in many applications. Video SAR has a high imaging frame rate, so consecutive frames contain redundant information. This temporal redundancy has proven useful for suppressing speckle noise; however, the motion and the local differences between frames make it difficult to exploit. This article presents a video SAR image despeckling framework based on a new unsupervised training strategy referred to as DualNoise2Noise. The framework consists of a registration network and a denoising network. The registration network first compensates in real time for the motion between two adjacent video SAR frames. After registration, the two adjacent frames, each corrupted by random speckle noise, can be regarded as observations of the same region with local differences. The denoising network then adopts the DualNoise2Noise training strategy to suppress speckle noise by exploiting the temporal redundancy while removing the negative impact of the local differences. The proposed approach has been applied to real video SAR data, and the experimental results are convincing.
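To make the core idea concrete, the sketch below shows a minimal Noise2Noise-style training step in which two already-registered adjacent frames serve as noisy targets for each other, so no clean reference image is required. This is only an illustrative assumption of how such a scheme could be wired up; the network architecture (`TinyDenoiser`), the symmetric L1 loss, and all names are hypothetical and are not the authors' DualNoise2Noise implementation.

```python
# Minimal sketch (assumed, not the paper's code) of Noise2Noise-style
# training on two registered video SAR frames using PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyDenoiser(nn.Module):
    """Small placeholder CNN standing in for the paper's denoising network."""

    def __init__(self, channels: int = 1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)


def noise2noise_step(model, optimizer, frame_a, frame_b):
    """One training step: each registered noisy frame is used as the
    target for denoising the other, exploiting temporal redundancy
    instead of a clean reference image."""
    optimizer.zero_grad()
    loss = F.l1_loss(model(frame_a), frame_b) + F.l1_loss(model(frame_b), frame_a)
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = TinyDenoiser()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Random tensors stand in for two adjacent, already-registered frames.
    frame_a = torch.rand(1, 1, 128, 128)
    frame_b = torch.rand(1, 1, 128, 128)
    print(noise2noise_step(model, opt, frame_a, frame_b))
```

In practice, the frames would first be aligned by a registration step (a separate network in the paper), and the loss would need additional handling of the local differences between frames, which this toy example omits.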