Two Exposure Fusion Using Prior-Aware Generative Adversarial Network

Producing a high dynamic range (HDR) image from two low dynamic range (LDR) images with extreme exposures is challenging due to the lack of well-exposed content. Existing works either perform pixel fusion based on weighted quantization or conduct feature fusion using deep learning techniques. In contrast, our core idea is to progressively incorporate the pixel-domain knowledge of the LDR images into the feature fusion process. Specifically, we propose a novel Prior-Aware Generative Adversarial Network (PA-GAN), along with a new dual-level loss, for two-exposure fusion. The proposed PA-GAN is composed of a content-prior-guided encoder and a detail-prior-guided decoder, which are responsible for content fusion and detail calibration, respectively. We further train the network using a dual-level loss that combines a semantic-level loss and a pixel-level loss. Extensive qualitative and quantitative evaluations on diverse image datasets demonstrate that the proposed PA-GAN outperforms state-of-the-art methods.
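The dual-level loss described above pairs a pixel-level reconstruction term with a semantic-level term computed on deep features. Below is a minimal sketch of such a loss in PyTorch; the use of pretrained VGG-16 features for the semantic-level term, the L1 distance, and the weight semantic_weight are illustrative assumptions, not the configuration published in the paper.

# Sketch of a dual-level loss: pixel-level L1 plus a semantic-level term
# measured in a pretrained VGG-16 feature space (both choices are assumptions).
import torch
import torch.nn as nn
import torchvision.models as models

class DualLevelLoss(nn.Module):
    def __init__(self, semantic_weight=0.1):
        super().__init__()
        # Pixel-level term: L1 distance between the fused image and the reference.
        self.l1 = nn.L1Loss()
        # Semantic-level term: L1 distance between mid-level VGG-16 features,
        # with the backbone frozen so it only provides supervision.
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16]
        for p in vgg.parameters():
            p.requires_grad = False
        self.feature_extractor = vgg.eval()
        self.semantic_weight = semantic_weight

    def forward(self, fused, reference):
        pixel_loss = self.l1(fused, reference)
        semantic_loss = self.l1(self.feature_extractor(fused),
                                self.feature_extractor(reference))
        return pixel_loss + self.semantic_weight * semantic_loss

# Usage: fused and reference are (N, 3, H, W) tensors in [0, 1].
criterion = DualLevelLoss()
loss = criterion(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256))

In practice the adversarial loss of the GAN would be added on top of this term; the sketch only covers the dual-level reconstruction part.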

Keywords: network; fusion; aware generative; fusion using; generative adversarial; prior aware

Journal Title: IEEE Transactions on Multimedia
Year Published: 2022
