Traditional and deep learning-based fusion methods generate an intermediate decision map and obtain the fused image through a series of postprocessing procedures. However, the fusion results produced by these methods tend to lose source-image details or introduce artifacts. Inspired by deep learning-based image reconstruction techniques, we propose a multifocus image fusion network that requires no postprocessing and addresses these problems in an end-to-end, supervised manner. To train the fusion model sufficiently, we generate a large-scale multifocus image data set with ground-truth fusion images. Moreover, to obtain a more informative fused image, we design a novel fusion strategy based on unity fusion attention, which comprises a channel attention module and a spatial attention module. The proposed approach consists of three key components: feature extraction, feature fusion, and image reconstruction. We first use seven convolutional blocks to extract features from the source images; the extracted features are then fused by the proposed strategy in the feature fusion layer; finally, the fused features are reconstructed into the output image by four convolutional blocks. Experimental results demonstrate that the proposed approach achieves remarkable fusion performance and superior time efficiency compared with 19 state-of-the-art fusion methods.
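The abstract does not give the exact formulation of the unity fusion attention, so the following is only a minimal NumPy sketch of one plausible channel-plus-spatial attention fusion of two feature maps. The function names (`channel_attention`, `spatial_attention`, `unity_fusion`), the sigmoid gating, and the per-position normalization of the two sources' weights are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, H, W); global average pooling gives one weight per channel
    pooled = feat.mean(axis=(1, 2))               # (C,)
    return sigmoid(pooled)[:, None, None]         # (C, 1, 1), broadcastable

def spatial_attention(feat):
    # mean over channels gives an (H, W) saliency map, one weight per pixel
    return sigmoid(feat.mean(axis=0))[None, :, :]  # (1, H, W)

def unity_fusion(feat_a, feat_b):
    # Weight each source's features by its own channel and spatial attention,
    # then normalize so the two contributions sum to (nearly) one per element.
    wa = channel_attention(feat_a) * spatial_attention(feat_a)
    wb = channel_attention(feat_b) * spatial_attention(feat_b)
    total = wa + wb + 1e-8                         # epsilon avoids divide-by-zero
    return (wa / total) * feat_a + (wb / total) * feat_b

# Example: fuse two 8-channel 16x16 feature maps
rng = np.random.default_rng(0)
feat_a = rng.standard_normal((8, 16, 16))
feat_b = rng.standard_normal((8, 16, 16))
fused = unity_fusion(feat_a, feat_b)              # same (8, 16, 16) shape
```

Because the normalized weights form an (approximately) convex combination, each fused element lies between the corresponding elements of the two source feature maps, which keeps the fused features in the same range as the inputs.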