Simultaneously fusing hyperspectral (HS), multispectral (MS), and panchromatic (PAN) images offers a new paradigm for generating a high-resolution HS (HRHS) image. In this study, we propose an interpretable model-driven deep network for HS, MS, and PAN image fusion, called HMPNet. We first propose a new fusion model that utilizes a deep prior to describe the complicated relationship between the HRHS and PAN images, which arises from their large resolution difference. Because this deep prior is learned from data, it alleviates the difficulty that traditional model-based approaches face in designing suitable hand-crafted priors. We then solve the optimization problem of this fusion model with the proximal gradient descent (PGD) algorithm, which proceeds through a series of iterative steps. By unrolling these iterative steps into several network modules, we obtain the HMPNet. Consequently, all parameters, together with the deep prior, are learned within the network, which simplifies the selection of optimal fusion parameters and achieves a favorable balance between spatial and spectral quality. Moreover, every module in the HMPNet has an explainable physical meaning, which helps improve its generalization capability. In experiments on a series of simulated and real datasets, we demonstrate the advantages of the HMPNet over other state-of-the-art methods through both visual comparison and quantitative analysis.
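The abstract's core computational idea, unrolling PGD iterations into network stages, can be illustrated generically. Each PGD step computes x ← prox(x − η ∇f(x)), where f is the data-fidelity term and the proximal operator is replaced by a learned module playing the role of the deep prior. Below is a minimal PyTorch sketch of this pattern; the names ProxNet, UnrolledPGD, num_stages, and grad_fn are hypothetical illustrations, not the paper's actual architecture, and a small residual CNN stands in for the unspecified prior module.

```python
import torch
import torch.nn as nn


class ProxNet(nn.Module):
    """Hypothetical stand-in for the learned deep prior (proximal module)."""

    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # Residual refinement: the network predicts a correction to x.
        return x + self.body(x)


class UnrolledPGD(nn.Module):
    """Generic PGD unrolling: each stage = gradient step + learned prox."""

    def __init__(self, channels, num_stages=5):
        super().__init__()
        self.stages = nn.ModuleList(ProxNet(channels) for _ in range(num_stages))
        # One learnable step size per stage, trained jointly with the prior,
        # mirroring the claim that all parameters are learned in the network.
        self.steps = nn.Parameter(torch.full((num_stages,), 0.1))

    def forward(self, x0, grad_fn):
        # grad_fn(x): gradient of the data-fidelity term; in a fusion model
        # it would be derived from the degradation operators relating the
        # HRHS estimate to the observed HS, MS, and PAN images.
        x = x0
        for prox, eta in zip(self.stages, self.steps):
            x = x - eta * grad_fn(x)  # gradient descent on data fidelity
            x = prox(x)               # proximal step via the deep prior
        return x
```

In a complete fusion network, grad_fn would encode the spatial blurring/downsampling and spectral-response operators of the HS, MS, and PAN observation models; it is left abstract here to keep the sketch self-contained.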