Multifocus image fusion has attracted considerable attention because it can overcome the physical limitations of optical imaging equipment by fusing multiple images with different depths of field into one all-in-focus image. However, most existing deep learning-based fusion methods concentrate on segmenting focused and defocused regions, which loses detail near the focus boundaries. To address this issue, this article proposes a novel generative adversarial network with dense connections (Fusion-UDCGAN) to fuse multifocus images. Specifically, the encoder and the decoder are composed of dense modules linked by dense long connections to ensure the quality of the generated image. A content and clarity loss, based on the $L_1$ norm and a novel sum-modified-Laplacian (NSML), is further embedded to encourage the fused images to retain more texture features. Because previous dataset construction approaches can lose the relation between the overall structure and the information near the boundaries, a new dataset, which is uniformly distributed and simulates natural focus boundary conditions, is constructed for model training. Subjective and objective experimental results indicate that the proposed method significantly improves sharpness, contrast, and detail richness compared with several state-of-the-art methods.
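The abstract does not spell out the NSML formulation, but the classic sum-modified-Laplacian (SML) focus measure that it extends is well documented. Below is a minimal NumPy sketch of SML plus a hypothetical $L_1$ clarity term built on it; the `clarity_loss` function, its max-of-sources target, and the `radius` parameter are illustrative assumptions, not the paper's exact definition.

```python
import numpy as np

def modified_laplacian(img: np.ndarray) -> np.ndarray:
    """Modified Laplacian: |2I - I_left - I_right| + |2I - I_up - I_down|."""
    p = np.pad(img, 1, mode="edge")  # replicate borders to keep the input shape
    center = p[1:-1, 1:-1]
    ml_x = np.abs(2.0 * center - p[1:-1, :-2] - p[1:-1, 2:])  # horizontal 2nd difference
    ml_y = np.abs(2.0 * center - p[:-2, 1:-1] - p[2:, 1:-1])  # vertical 2nd difference
    return ml_x + ml_y

def sml(img: np.ndarray, radius: int = 1) -> np.ndarray:
    """Sum-modified-Laplacian: window sum of the modified Laplacian (clarity measure)."""
    ml = modified_laplacian(img)
    k = 2 * radius + 1
    p = np.pad(ml, radius, mode="edge")
    # Naive (2r+1) x (2r+1) box sum; an integral image would be faster.
    out = np.zeros_like(ml)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + ml.shape[0], dx:dx + ml.shape[1]]
    return out

def clarity_loss(fused: np.ndarray, src_a: np.ndarray, src_b: np.ndarray,
                 radius: int = 1) -> float:
    """Hypothetical clarity term (assumption, not the paper's NSML loss):
    penalize, in L1, the gap between the fused image's SML response and the
    per-pixel maximum SML of the two source images."""
    target = np.maximum(sml(src_a, radius), sml(src_b, radius))
    return float(np.abs(sml(fused, radius) - target).mean())
```

In-focus pixels produce large second derivatives and hence large SML responses, which is why SML-style measures are a standard choice for quantifying clarity in multifocus fusion.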