An adaptive infrared and visible image fusion method based on visual saliency and hierarchical Bayesian inference (AVSHB), which preserves the highest similarity between the fused image and the source images, is proposed in this article. First, an effective salient edge-preserving filter (SEPF) is developed to decompose each source image into a base layer and a detail layer. An $\ell_{1}$-norm gradient minimization is derived and embedded into a two-scale acceleration scheme in the SEPF. Benefiting from the SEPF, the edges of salient regions are preserved without distortion. Then, an adaptive fusion scheme is proposed that fully accounts for the characteristics of each layer. More concretely, we design a two-scale fusion strategy based on a visual saliency map (VSM) for the base layers and derive a hierarchical Bayesian fusion model for the detail layers. Experimental results on the TNO and RoadScene datasets and the Nato camp image sequence demonstrate that AVSHB outperforms 16 state-of-the-art fusion methods both qualitatively and quantitatively. AVSHB generates improved fusion results by sufficiently retaining salient targets and rich details from the source images.
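For orientation, below is a minimal sketch of the decompose-then-fuse pipeline the abstract outlines. Every component here is an illustrative assumption, not the paper's method: a Gaussian filter stands in for the SEPF's $\ell_{1}$-norm gradient minimization, a mean-deviation map stands in for the VSM, and a max-absolute rule stands in for the hierarchical Bayesian detail fusion.

```python
# Hypothetical sketch of a two-layer infrared/visible fusion pipeline.
# All function names, parameters, and fusion rules are placeholder
# assumptions; the paper's SEPF, VSM, and hierarchical Bayesian model
# are not reproduced here.
import numpy as np
from scipy.ndimage import gaussian_filter


def decompose(img, sigma=5.0):
    """Split an image into a smooth base layer and a residual detail layer.
    A Gaussian filter substitutes for the paper's salient edge-preserving
    filter (SEPF)."""
    base = gaussian_filter(img, sigma)
    detail = img - base
    return base, detail


def visual_saliency_map(img):
    """Crude stand-in for a visual saliency map (VSM): per-pixel distance
    from the mean intensity, normalized to [0, 1]."""
    sal = np.abs(img - img.mean())
    return sal / (sal.max() + 1e-12)


def fuse(ir, vis):
    """Fuse base layers with saliency-derived weights and detail layers
    with a simple max-absolute rule. Inputs are assumed to be float
    images in [0, 1]."""
    b_ir, d_ir = decompose(ir)
    b_vis, d_vis = decompose(vis)
    s_ir = visual_saliency_map(ir)
    s_vis = visual_saliency_map(vis)
    w = s_ir / (s_ir + s_vis + 1e-12)        # per-pixel base-layer weight
    fused_base = w * b_ir + (1.0 - w) * b_vis
    fused_detail = np.where(np.abs(d_ir) >= np.abs(d_vis), d_ir, d_vis)
    return np.clip(fused_base + fused_detail, 0.0, 1.0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ir = rng.random((64, 64))                # synthetic stand-in images
    vis = rng.random((64, 64))
    print(fuse(ir, vis).shape)               # (64, 64)
```

The separation of concerns mirrors the abstract's design: a saliency-weighted rule for the low-frequency base layers, where large-scale target brightness lives, and a separate rule for the high-frequency detail layers, where texture and edges live.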