Abstract Most previous change detection methods are designed around the difference of two images. However, directly using intensities or features to generate a difference image is easily affected by illumination and camera pose variations. In this paper, we show that accurate change detection results can be obtained by fusing the absolute differences of multiscale deep features of the reference and query images. Specifically, we build a change detection network that computes the absolute differences of the multiscale deep features of an image pair and learns adaptive features for change detection. The proposed network is based on off-the-shelf CNNs, whose convolutional layer blocks serve as feature extraction modules for multiscale deep features. We devise intra and cross encoding modules. The intra encoding modules learn change-related features from the extracted features, which are then used to generate absolute difference features (ADFs). By progressively fusing the ADFs from high to low layers with the cross encoding modules, we obtain a full-resolution change detection result. Extensive experiments on three change detection benchmark datasets validate the superiority and effectiveness of the proposed method over state-of-the-art change detection methods.
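A minimal PyTorch sketch of the pipeline described above, assuming a VGG-16 backbone and simple convolutional stand-ins for the intra and cross encoding modules; the class name `ADFChangeDetector`, channel sizes, and fusion details are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16


class ADFChangeDetector(nn.Module):
    def __init__(self, feat_channels=(128, 256, 512), mid_channels=64):
        super().__init__()
        # Off-the-shelf convolutional blocks used as multiscale feature extractors
        # (in practice the backbone would typically be loaded with pretrained weights).
        vgg = vgg16(weights=None).features
        self.block1 = vgg[:9]     # -> 128 channels, 1/2 resolution
        self.block2 = vgg[9:16]   # -> 256 channels, 1/4 resolution
        self.block3 = vgg[16:23]  # -> 512 channels, 1/8 resolution
        # "Intra encoding" stand-ins: learn change-related features at each scale.
        self.intra = nn.ModuleList(
            nn.Conv2d(c, mid_channels, 3, padding=1) for c in feat_channels
        )
        # "Cross encoding" stand-ins: fuse upsampled higher-level ADFs with lower-level ones.
        self.cross = nn.ModuleList(
            nn.Conv2d(2 * mid_channels, mid_channels, 3, padding=1) for _ in range(2)
        )
        self.head = nn.Conv2d(mid_channels, 1, 1)  # per-pixel change score

    def extract(self, x):
        f1 = self.block1(x)
        f2 = self.block2(f1)
        f3 = self.block3(f2)
        return [f1, f2, f3]

    def forward(self, ref, query):
        feats_r = self.extract(ref)
        feats_q = self.extract(query)
        # Absolute difference features (ADFs) at each scale.
        adfs = [torch.abs(enc(fr) - enc(fq))
                for enc, fr, fq in zip(self.intra, feats_r, feats_q)]
        # Progressive high-to-low fusion: upsample and merge with the next finer scale.
        fused = adfs[-1]
        for i in range(len(adfs) - 2, -1, -1):
            up = F.interpolate(fused, size=adfs[i].shape[-2:],
                               mode='bilinear', align_corners=False)
            fused = self.cross[i](torch.cat([adfs[i], up], dim=1))
        # Upsample to the input resolution for a full-resolution change map.
        logits = F.interpolate(self.head(fused), size=ref.shape[-2:],
                               mode='bilinear', align_corners=False)
        return torch.sigmoid(logits)


if __name__ == "__main__":
    model = ADFChangeDetector()
    ref = torch.randn(1, 3, 256, 256)
    query = torch.randn(1, 3, 256, 256)
    print(model(ref, query).shape)  # torch.Size([1, 1, 256, 256])
```

The key design point the sketch mirrors is that the absolute difference is taken on learned per-scale features rather than on raw intensities, and the resulting ADFs are fused top-down to recover a full-resolution change map.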