Multi-focus image fusion extracts the differently focused regions of the same scene from partially focused source images and merges them into a composite image in which all objects are in focus. Two points are crucial to multi-focus image fusion: an effective focus measure to evaluate the sharpness of the source images, and an accurate segmentation method to extract the focused regions. In conventional multi-focus image fusion methods, the decision map obtained from the focus measure is sensitive to mis-registration or produces uneven boundary lines. In this paper, the maximum of the top-hat and bottom-hat transform responses is used as the gradient measurement value, and the complementary features across multiple scales are exploited to achieve an accurate focus measure for the initial segmentation. To obtain a better fusion decision map, a robust image matting algorithm is used to refine the trimap generated by the initial segmentation. The strong correlation between the source images is then fully exploited to optimize the edge regions of the decision map and improve the fusion quality. Finally, the fused image is constructed from the fusion decision map and the source images. We perform qualitative and quantitative experiments on publicly available databases to verify the effectiveness of the method. The results show that, compared with several state-of-the-art algorithms, the proposed fusion method obtains accurate decision maps and achieves better performance in both visual perception and quantitative analysis.
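The abstract describes a focus measure built from the maximum of top-hat and bottom-hat responses combined across scales. The sketch below is one minimal way such a measure could look; the structuring-element sizes, the max-based combination across scales, and the function names are assumptions for illustration and are not taken from the paper itself.

```python
import numpy as np
from scipy import ndimage


def multiscale_focus_measure(gray, scales=(3, 7, 11)):
    """Illustrative focus measure: at each scale, take the pixel-wise maximum of the
    top-hat (bright detail) and bottom-hat (dark detail) responses, then combine the
    scales by a pixel-wise maximum. Scale sizes are assumed values."""
    gray = gray.astype(np.float64)
    measure = np.zeros_like(gray)
    for k in scales:
        tophat = ndimage.white_tophat(gray, size=k)     # bright fine structures
        bottomhat = ndimage.black_tophat(gray, size=k)  # dark fine structures
        measure = np.maximum(measure, np.maximum(tophat, bottomhat))
    return measure


def initial_decision_map(src_a, src_b, scales=(3, 7, 11)):
    """Binary map for the initial segmentation step: 1 where source A appears
    sharper than source B, 0 otherwise (before any matting-based refinement)."""
    fm_a = multiscale_focus_measure(src_a, scales)
    fm_b = multiscale_focus_measure(src_b, scales)
    return (fm_a >= fm_b).astype(np.uint8)
```

In the paper's pipeline this binary map would only be the starting point; the trimap refinement via image matting and the edge optimization using inter-image correlation are not reproduced here.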