This paper presents an end-to-end deep convolutional neural network (DCNN) model for multi-focus image fusion that produces the final fused image directly from the source images. To improve fusion accuracy, the proposed multi-focus fusion DCNN introduces a multi-scale feature extraction (MFE) unit that collects complementary features from different spatial scales and fuses them to exploit richer spatial information. In addition, a visual attention unit is designed to help the network locate the focused regions more accurately and select more useful features, so that details are stitched seamlessly during fusion. Experimental results show that the proposed method outperforms several existing multi-focus image fusion methods in both subjective visual quality and objective quality metrics.
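The core idea, multi-scale focus features steering an attention-like weighting map that selects in-focus content from each source, can be illustrated with a deliberately simplified classical sketch. Everything below is an illustrative assumption rather than the paper's actual DCNN: the gradient-energy focus measure, the scale set `(3, 7)`, and the helper names `local_energy` and `fuse` are all hypothetical stand-ins for the learned MFE and attention units.

```python
import numpy as np

def local_energy(img, k):
    """Gradient-magnitude energy averaged over a k x k window (a crude focus measure)."""
    gy, gx = np.gradient(img.astype(float))
    e = gx ** 2 + gy ** 2
    pad = k // 2
    ep = np.pad(e, pad, mode="reflect")
    out = np.empty_like(e)
    h, w = e.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = ep[i:i + k, j:j + k].mean()
    return out

def fuse(a, b, scales=(3, 7)):
    """Fuse two registered source images via multi-scale focus maps and softmax weights."""
    fa = sum(local_energy(a, k) for k in scales)  # multi-scale focus map for source a
    fb = sum(local_energy(b, k) for k in scales)  # multi-scale focus map for source b
    wa = np.exp(fa) / (np.exp(fa) + np.exp(fb))   # attention-style per-pixel weight
    return wa * a + (1 - wa) * b, wa

# Synthetic example: source A is "in focus" (textured) on the left half,
# source B on the right half; the fusion should favor each where it is sharp.
rng = np.random.default_rng(0)
texture = rng.random((32, 32))
flat = np.full((32, 32), 0.5)
src_a = np.hstack([texture[:, :16], flat[:, 16:]])
src_b = np.hstack([flat[:, :16], texture[:, 16:]])
fused, wa = fuse(src_a, src_b)
```

In this toy setup the weight map `wa` rises above 0.5 over A's textured left half and drops below 0.5 over B's textured right half, mimicking how an attention unit would route focused regions into the fused output; the paper's learned model replaces the handcrafted energy with trained multi-scale convolutional features.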
               