Pansharpening is an image fusion procedure that aims to produce a high spatial resolution multispectral (MS) image by combining a low spatial resolution MS image with a high spatial resolution panchromatic image. The most popular and successful paradigm for pansharpening is the framework known as detail injection, but it cannot fully exploit the complex and nonlinear complementary features of the two images. In this article, we propose a detail-injection-model-inspired deep fusion network for pansharpening (DIM-FuNet). First, by treating pansharpening as a complicated, nonlinear detail learning and injection problem, we establish a unified optimizing detail-injection model with triple detail fidelity terms: 1) a band-dependent spatial detail fidelity term; 2) a local detail fidelity term; and 3) a complicated detail synthesis term. Second, the model is optimized via iterative gradient descent and unfolded into a deep convolutional neural network. The resulting unrolled network has triple branches: a point-wise convolutional subnetwork and a depth-wise convolutional subnetwork correspond to the first two detail fidelity terms, while an adaptive weighted reconstruction module with a fusion subnetwork aggregates the details from the two branches and synthesizes the final complicated details. Finally, the deep unrolled network is trained in an end-to-end manner. Unlike traditional deep fusion networks, the architecture of DIM-FuNet is guided by the optimizing model and thus offers better interpretability. Experimental results at reduced and full resolution demonstrate the effectiveness of the proposed DIM-FuNet, which achieves the best performance compared with state-of-the-art pansharpening methods.
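The triple-branch fusion described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: all weights are random stand-ins for parameters the real network would learn end to end, and the variable names (`d_pw`, `d_dw`, `alpha`) are hypothetical. It shows a point-wise (1x1) convolution branch mixing channels, a depth-wise 3x3 convolution branch filtering each band independently, and an adaptive per-band weighting that fuses the two detail estimates before injection into the upsampled MS image.

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W = 4, 8, 8                          # MS bands, spatial size (toy values)
ms_up = rng.standard_normal((C, H, W))     # upsampled low-resolution MS image
pan = rng.standard_normal((1, H, W))       # panchromatic image
x = np.concatenate([ms_up, pan], axis=0)   # stacked network input, (C+1, H, W)

# Branch 1: point-wise (1x1) convolution -- mixes channels at each pixel,
# loosely modelling the band-dependent spatial detail fidelity term.
w_pw = rng.standard_normal((C, C + 1))
d_pw = np.einsum('oc,chw->ohw', w_pw, x)   # (C, H, W) band-dependent details

# Branch 2: depth-wise 3x3 convolution -- one kernel per band, no channel
# mixing, loosely modelling the local detail fidelity term.
w_dw = rng.standard_normal((C, 3, 3))
xp = np.pad(ms_up, ((0, 0), (1, 1), (1, 1)))
d_dw = np.zeros_like(ms_up)
for c in range(C):
    for i in range(H):
        for j in range(W):
            d_dw[c, i, j] = np.sum(xp[c, i:i + 3, j:j + 3] * w_dw[c])

# Adaptive weighted reconstruction: a learnable per-band gate (here random)
# blends the two detail estimates, which are then injected into ms_up.
g = rng.standard_normal(C)
alpha = 1.0 / (1.0 + np.exp(-g))           # sigmoid keeps weights in (0, 1)
detail = alpha[:, None, None] * d_pw + (1 - alpha)[:, None, None] * d_dw
out = ms_up + detail                       # detail-injection output, (C, H, W)
```

In the actual DIM-FuNet these operations are stacked over unrolled gradient-descent iterations and the fusion subnetwork learns the weighting, rather than using a fixed sigmoid gate as above.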