Pairing image patches is the task of deciding whether two image patches depict the same scene despite being captured under different imaging conditions. It is a key step in applications of unmanned aerial vehicle (UAV) video imagery. The challenges in pairing UAV image patches stem from the complex imaging conditions on UAV platforms, such as jitter, frequent undefined motion, viewpoint changes, and illumination changes. Popular existing methods typically follow a fixed pipeline: first preprocess the images, then extract hand-crafted features, and finally match the extracted features by evaluating an independently predefined similarity metric. Such methods can cope with only some of the negative factors arising from complex imaging conditions and therefore cannot effectively meet the challenges of pairing UAV image patches. This study aims to address these challenges by automatically and simultaneously learning more representative features and a more accurate similarity metric. Specifically, it proposes a deep learning method that jointly learns the feature representations and the similarity metric from training samples gathered under various imaging conditions. The proposed pairing system consists of three parts: two-stream convolutional neural networks (CNNs), a similarity metric layer, and a softmax layer, all trained jointly with the standard backpropagation algorithm. Moreover, to further improve performance, the study develops a transfer learning strategy for the proposed deep model. Two new training datasets, built from satellite scenes and UAV scenes, respectively, are used to evaluate the proposed pairing system, and the experimental results show that the method outperforms the most recent approaches in pairing UAV video image patches.
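To make the three-part structure concrete, below is a minimal PyTorch sketch of such a pairing model. It is illustrative only: the class name PatchPairingNet, all layer sizes, and the 64x64 grayscale patch size are assumptions rather than the paper's configuration, and the sketch shares weights between the two CNN streams (a Siamese-style simplification); the paper's two streams may instead carry separate parameters.

```python
import torch
import torch.nn as nn

class PatchPairingNet(nn.Module):
    """Illustrative two-stream patch-pairing model: two CNN streams
    (weight-sharing here, an assumption), a learned similarity metric
    layer, and a softmax output folded into the loss below."""

    def __init__(self, feat_dim=256):
        super().__init__()
        # CNN feature extractor applied to each patch of the pair.
        self.stream = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, feat_dim), nn.ReLU(),
        )
        # Learned similarity metric over the concatenated pair features.
        self.metric = nn.Sequential(
            nn.Linear(2 * feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 2),  # logits: {different scene, same scene}
        )

    def forward(self, patch_a, patch_b):
        fa = self.stream(patch_a)  # features of the first patch
        fb = self.stream(patch_b)  # features of the second patch
        return self.metric(torch.cat([fa, fb], dim=1))

# Joint training of features and metric with standard backpropagation.
model = PatchPairingNet()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()  # softmax + negative log-likelihood

a = torch.randn(8, 1, 64, 64)       # dummy batch of grayscale patches
b = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 2, (8,))  # 1 = same scene, 0 = different

optimizer.zero_grad()
loss = criterion(model(a, b), labels)
loss.backward()   # gradients flow through metric and both streams
optimizer.step()
```

Folding the softmax into nn.CrossEntropyLoss is the idiomatic PyTorch equivalent of an explicit softmax output layer; because the loss backpropagates through both the metric layers and the CNN streams, the feature representations and the similarity metric are optimized jointly, matching the training scheme the abstract describes.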