We formulate registration as a function that maps the reference and sensed input images to eight displacement parameters between prescribed matching points, in contrast to the conventional pipeline of feature extraction, description, matching, and geometric constraint enforcement. The projection transformation matrix (PTM) is then computed within the neural network and used to warp the sensed image, uniting all matching steps under a single framework. In this article, we propose a multimodal image fusion network with self-attention that merges the feature representations of the reference and sensed images. The fused information is then used to regress the displacement parameters of the prescribed points, yielding the PTM between the reference and sensed images. Finally, the PTM is fed into a spatial transformation network (STN), which warps the sensed image into the coordinate frame of the reference image, achieving end-to-end matching. In addition, a dual-supervised loss function is proposed to optimize the network from the perspectives of both prescribed-point displacement and overall pixel matching. Qualitative and quantitative results on multimodal remote sensing image matching tasks validate the effectiveness of our method. The code is available at: https://github.com/liliangzhi110/E2EIR.
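To illustrate the geometry the abstract describes, the sketch below (PyTorch; not the authors' released code — the corner-point parameterization, the helper names `displacements_to_ptm` and `warp_with_ptm`, and the normalized coordinate convention are assumptions for illustration) shows how eight regressed displacement parameters, two per prescribed point, can be turned into a 3x3 PTM via a direct linear transform and then used to warp the sensed image onto the reference coordinates with a differentiable grid sample, the role the STN plays in the pipeline.

```python
import torch
import torch.nn.functional as F


def displacements_to_ptm(src_pts: torch.Tensor, disp: torch.Tensor) -> torch.Tensor:
    """Solve for the 3x3 PTM H mapping src_pts to src_pts + disp (DLT).

    src_pts: (4, 2) prescribed points in the sensed image, normalized to [-1, 1].
    disp:    (4, 2) regressed displacements (the eight parameters), same units.
    """
    dst_pts = src_pts + disp
    rows = []
    for (x, y), (u, v) in zip(src_pts.tolist(), dst_pts.tolist()):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = torch.tensor(rows, dtype=torch.float32)
    # H is the null space of A: the right singular vector with the smallest singular value.
    _, _, Vh = torch.linalg.svd(A)
    H = Vh[-1].reshape(3, 3)
    return H / H[2, 2]


def warp_with_ptm(sensed: torch.Tensor, H: torch.Tensor) -> torch.Tensor:
    """Warp a (1, C, h, w) sensed image into the reference frame using H."""
    _, _, h, w = sensed.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
    )
    ones = torch.ones_like(xs)
    ref_coords = torch.stack([xs, ys, ones], dim=-1).reshape(-1, 3)  # (h*w, 3)
    # Inverse warp: sample the sensed image at H^-1 applied to reference coordinates.
    src = (torch.linalg.inv(H) @ ref_coords.T).T
    src = src[:, :2] / src[:, 2:3]
    grid = src.reshape(1, h, w, 2)
    return F.grid_sample(sensed, grid, align_corners=True)


# Example usage with the four image corners as the prescribed points:
corners = torch.tensor([[-1.0, -1.0], [1.0, -1.0], [1.0, 1.0], [-1.0, 1.0]])
disp = 0.05 * torch.randn(4, 2)          # stands in for the network's regression output
H = displacements_to_ptm(corners, disp)  # projection transformation matrix
sensed = torch.rand(1, 3, 128, 128)      # stands in for the sensed image
aligned = warp_with_ptm(sensed, H)       # sensed image warped into the reference frame
```

Regressing corner displacements rather than the nine matrix entries is a common choice in learned homography estimation, since the offsets are bounded and better conditioned targets for a network.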
               