
Attention-Guided Global-Local Adversarial Learning for Detail-Preserving Multi-Exposure Image Fusion

Deep learning networks have recently yielded impressive progress in multi-exposure image fusion. However, restoring realistic texture details while correcting color distortion remains a challenging problem. To alleviate these issues, in this paper we propose an attention-guided global-local adversarial learning network that fuses extreme-exposure images in a coarse-to-fine manner. First, a coarse fusion result is generated under the guidance of attention weight maps, which capture the essential regions of interest from both source images. Second, we formulate an edge loss function, along with a spatial feature transform layer, to refine the fusion process so that it makes full use of edge information when handling blurry edges. Moreover, by incorporating global-local learning, our method balances the pixel intensity distribution and corrects color distortion on spatially varying source images from both the image and patch perspectives. Such a global-local discriminator ensures that all local patches of the fused image align with realistic normal-exposure ones. Extensive experiments on two publicly available datasets show that our method substantially outperforms state-of-the-art methods in both visual inspection and objective analysis. Furthermore, ablation experiments confirm that our method has significant advantages in generating high-quality fused results with appealing details, clear targets, and faithful color. Source code will be available at https://github.com/JinyuanLiu-CV/AGAL.
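
The abstract gives no implementation details, but the edge-loss idea it mentions can be illustrated concretely. The following is a minimal, hypothetical PyTorch sketch of one common formulation: an L1 penalty between Sobel gradient maps of the fused and reference images. The function names (`sobel_edges`, `edge_loss`), the Sobel kernels, and the L1 distance are assumptions chosen for illustration, not the authors' actual loss; consult the linked repository for the real definition.

```python
import torch
import torch.nn.functional as F

def sobel_edges(img: torch.Tensor) -> torch.Tensor:
    """Depthwise Sobel filtering; returns a gradient-magnitude map per channel."""
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]])
    ky = kx.t()  # vertical kernel is the transpose of the horizontal one
    c = img.shape[1]
    # Shape kernels as (C, 1, 3, 3) so groups=c applies one filter per channel.
    kx = kx.view(1, 1, 3, 3).repeat(c, 1, 1, 1).to(img)
    ky = ky.view(1, 1, 3, 3).repeat(c, 1, 1, 1).to(img)
    gx = F.conv2d(img, kx, padding=1, groups=c)
    gy = F.conv2d(img, ky, padding=1, groups=c)
    # Small epsilon keeps the sqrt differentiable at zero gradient.
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def edge_loss(fused: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
    """L1 distance between the edge maps of the fused and reference images."""
    return F.l1_loss(sobel_edges(fused), sobel_edges(reference))

# Usage sketch: fused and reference are (N, 3, H, W) tensors in [0, 1];
# the loss weight lambda_edge is a hypothetical hyperparameter.
# total_loss = adversarial_loss + lambda_edge * edge_loss(fused, reference)
```

A loss of this shape penalizes blurry boundaries directly in gradient space, which is one plausible way an edge term could sharpen fused results as the abstract describes.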

Keywords: fusion; multi-exposure; attention; global-local; exposure; image

Journal Title: IEEE Transactions on Circuits and Systems for Video Technology
Year Published: 2022


