Infrared and visible images are captured by different sensors, resulting in substantial differences between the two modalities. However, current image fusion methods mainly focus on retaining global information while neglecting to preserve the salient objects of the two source images. Additionally, existing evaluation metrics fail to measure whether the salient objects of the two original modalities are preserved in the fused image. To this end, we propose a novel infrared and visible image fusion method called SCFusion, which maintains salient-object consistency between the two original modalities and the fused image. Specifically, we design a new saliency decision (SD) module that separates the unique and common saliency maps of the infrared and visible images for target enhancement in the final fused image. We then introduce a new metric called saliency information weight (SIW) to evaluate the preservation of salient objects by calculating the overlap between the saliency map of the fused image and those of the original modalities. To validate the practical application of our fusion algorithm, we build a physical visible-infrared fusion system, comprising a dual-sensor camera and an AI edge platform, that integrates SCFusion to provide real-time service. Quantitative and qualitative experiments demonstrate the superiority of SCFusion over state-of-the-art methods in terms of preserving the salient objects of the two original modalities.
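The abstract states only that SIW scores the overlap between the fused image's saliency map and those of the source modalities; the exact formula is not given. A minimal sketch of that idea, assuming a thresholded intersection-over-union as the overlap measure and a simple average over the two source modalities (both choices are hypothetical, not taken from the paper):

```python
import numpy as np


def saliency_overlap(fused_sal, source_sal, thresh=0.5):
    """Overlap (IoU) between two binarized saliency maps in [0, 1]."""
    f = fused_sal >= thresh
    s = source_sal >= thresh
    inter = np.logical_and(f, s).sum()
    union = np.logical_or(f, s).sum()
    # If neither map contains any salient pixel, treat the maps as agreeing.
    return inter / union if union else 1.0


def siw_sketch(fused_sal, ir_sal, vis_sal):
    """Illustrative SIW-style score: mean overlap with both source modalities."""
    return 0.5 * (saliency_overlap(fused_sal, ir_sal)
                  + saliency_overlap(fused_sal, vis_sal))
```

A fused image whose saliency map matches both source maps would score 1.0; a fusion that drops a modality's salient object scores lower against that modality.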