Hyperspectral and multispectral image fusion aims to fuse a low-spatial-resolution hyperspectral image (HSI) and a high-spatial-resolution multispectral image to form a high-spatial-resolution HSI. Motivated by the success of model-based and deep learning-based approaches, we propose a novel patch-aware deep fusion approach for HSI by unfolding a subspace-based optimization model, where moderate-sized patches are used in both the training and test phases. The goal of this approach is to make full use of the patch information under the subspace representation, restrict the scale of the deep network, and enhance its interpretability, thereby improving the fusion performance. First, a subspace-based fusion model is built with two regularization terms to localize pixels and extract texture. Then, the model is solved by the alternating direction method of multipliers (ADMM), which splits it into one fidelity-based subproblem and two regularization-based subproblems. Finally, a structured deep fusion network is constructed by unfolding all steps of the algorithm as network layers. Specifically, the fidelity-based subproblem is solved by gradient descent and implemented as a network module, while the two regularization-based subproblems are described by proximal operators and learned by two U-shaped architectures. Moreover, an aggregation fusion technique is proposed to improve performance by averaging the fused images over all iterations and aggregating the overlapping patches in the test phase. Experimental results on both synthetic and real datasets demonstrate the effectiveness of the proposed approach.
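
The abstract describes an unfolded ADMM stage consisting of a gradient step on the data-fidelity term followed by two learned proximal operators realized as U-shaped networks. The following is a minimal, simplified sketch of that structure only; it is not the authors' implementation. The subspace coefficients `A`, the fidelity-gradient callable `grad_fidelity`, the `ProxUNet` module, and the averaging of the two proximal outputs are all assumptions made for illustration.

```python
# Hypothetical sketch of one unfolded ADMM stage (gradient step + two learned
# proximal maps), operating on patches of subspace coefficients A.
# All module names and the combination rule are illustrative assumptions.
import torch
import torch.nn as nn


class ProxUNet(nn.Module):
    """Tiny U-shaped network standing in for a learned proximal operator."""

    def __init__(self, channels):
        super().__init__()
        # One down/up level; assumes even spatial dimensions in the patch.
        self.down = nn.Sequential(
            nn.Conv2d(channels, 2 * channels, 3, stride=2, padding=1), nn.ReLU()
        )
        self.up = nn.ConvTranspose2d(2 * channels, channels, 4, stride=2, padding=1)

    def forward(self, a):
        return a + self.up(self.down(a))  # residual proximal update


class UnfoldedStage(nn.Module):
    """One stage: gradient descent on the fidelity term, then two proximal maps."""

    def __init__(self, channels, step=0.1):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(step))  # learnable step size
        self.prox1 = ProxUNet(channels)  # assumed role: pixel-localization regularizer
        self.prox2 = ProxUNet(channels)  # assumed role: texture-extraction regularizer

    def forward(self, a, grad_fidelity):
        # grad_fidelity is assumed to return the gradient of the data-fidelity term at a.
        a = a - self.step * grad_fidelity(a)
        v1 = self.prox1(a)
        v2 = self.prox2(a)
        return 0.5 * (v1 + v2)  # simplified merge of the two regularized estimates


if __name__ == "__main__":
    stage = UnfoldedStage(channels=8)
    a0 = torch.randn(1, 8, 64, 64)      # a batch of subspace-coefficient patches
    grad = lambda a: a - a0             # toy stand-in for the fidelity gradient
    print(stage(a0, grad).shape)        # torch.Size([1, 8, 64, 64])
```

In the full approach described above, several such stages would be stacked, their intermediate outputs averaged across iterations, and the per-patch results aggregated over overlapping patches at test time; those steps are omitted from this sketch.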