Abstract. An adaptive joint sparsity model (JSM) is presented for multimodal image fusion. As a multisignal modeling technique derived from distributed compressed sensing, JSM has been successfully employed in multimodal image fusion. In traditional JSM-based fusion, the single dictionary learned by K-singular value decomposition (K-SVD) has high coherence, which may cause visual confusion and misleading artifacts in the fused image. In the proposed model, we first learn a set of subdictionaries using a supervised classification approach based on gradient information. Then, one of the learned subdictionaries is adaptively applied to JSM to obtain the common and innovative sparse coefficients. Finally, the fused image is reconstructed from the fused sparse coefficients and the adaptively selected dictionary. Infrared-visible images and medical images were selected to test the proposed approach, and the results were compared with those of traditional methods, including multiscale transform-based methods, the JSM-based method, and the adaptive sparse representation (ASR) model-based method. Experimental results on multimodal images demonstrate that the proposed fusion method outperforms the conventional JSM-based and ASR-based methods in terms of both visual quality and objective assessment.
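The following is a minimal sketch of patch-level JSM (JSM-1) decomposition and fusion, assuming a learned subdictionary D has already been selected for the current patch pair (e.g., by the gradient-based classifier mentioned in the abstract). The stacked-dictionary construction, the use of orthogonal matching pursuit, the max-energy rule for choosing the innovation component, and the function name jsm_fuse_patch are illustrative assumptions, not the paper's exact algorithm.

```python
# Illustrative JSM-1 style joint sparse coding for fusing two co-registered patches.
# Assumption: D is a (sub)dictionary of unit-norm atoms already chosen for this patch pair.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def jsm_fuse_patch(x1, x2, D, n_nonzero=8):
    """Fuse two length-n patch vectors x1, x2 using dictionary D of shape (n, m)."""
    n, m = D.shape
    Z = np.zeros_like(D)
    # JSM-1 model: x1 = D*theta_c + D*theta_1, x2 = D*theta_c + D*theta_2,
    # expressed as one stacked system [x1; x2] = A [theta_c; theta_1; theta_2].
    A = np.block([[D, D, Z],
                  [D, Z, D]])            # shape (2n, 3m)
    y = np.concatenate([x1, x2])         # stacked observation, length 2n
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(A, y)
    theta = omp.coef_
    theta_c, theta_1, theta_2 = theta[:m], theta[m:2 * m], theta[2 * m:]
    # Illustrative fusion rule: keep the common component plus the innovation
    # component with larger energy; the paper's own rule may differ.
    theta_i = theta_1 if np.linalg.norm(theta_1) >= np.linalg.norm(theta_2) else theta_2
    return D @ (theta_c + theta_i)

# Toy usage with a random orthonormal dictionary (for illustration only).
rng = np.random.default_rng(0)
D = np.linalg.qr(rng.standard_normal((64, 64)))[0]   # 8x8 patches, 64 atoms
x1, x2 = rng.standard_normal(64), rng.standard_normal(64)
fused_patch = jsm_fuse_patch(x1, x2, D)
print(fused_patch.shape)  # (64,)
```

In a full pipeline, this per-patch step would be applied over sliding patches of the source images, with each patch pair routed to the subdictionary selected by the gradient-based classifier, and the fused patches averaged back into the output image.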