Camera sensors often fail to capture clear images or videos in poorly lit environments. In this paper, we propose a trainable hybrid network to enhance the visibility of such degraded images. The proposed network consists of two distinct streams that simultaneously learn the global content and the salient structures of the clear image in a unified network. More specifically, the content stream estimates the global content of the low-light input through an encoder–decoder network. However, the encoder in the content stream tends to lose some structural details. To remedy this, we propose a novel spatially variant recurrent neural network (RNN) as an edge stream to model edge details, with the guidance of another auto-encoder. The experimental results show that the proposed network performs favorably against state-of-the-art low-light image enhancement algorithms.
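To make the two-stream idea concrete, the following is a minimal PyTorch sketch, not the authors' implementation: layer widths, the single left-to-right scan in the spatial RNN, and the fusion convolution are illustrative assumptions, and the auxiliary auto-encoder that guides the edge stream during training is omitted.

```python
# Hedged sketch of a two-stream low-light enhancement network:
# a content stream (encoder-decoder) plus an edge stream (spatially variant RNN).
import torch
import torch.nn as nn


class EncoderDecoder(nn.Module):
    """Content stream: estimates the global content of the low-light input."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.dec(self.enc(x))


class SpatiallyVariantRNN(nn.Module):
    """Edge stream: a 1-D recurrence along image columns whose per-pixel gate
    is predicted from the input, so the recurrence weight varies spatially."""
    def __init__(self, ch=16):
        super().__init__()
        self.feat = nn.Conv2d(3, ch, 3, padding=1)
        self.gate = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.Sigmoid())
        self.out = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, x):
        f, g = self.feat(x), self.gate(x)      # features and per-pixel recurrence weights
        h = torch.zeros_like(f[..., :1])       # hidden state for one image column
        cols = []
        for i in range(f.shape[-1]):           # left-to-right scan over the width
            h = g[..., i:i + 1] * h + (1 - g[..., i:i + 1]) * f[..., i:i + 1]
            cols.append(h)
        return self.out(torch.cat(cols, dim=-1))


class HybridNet(nn.Module):
    """Fuses the content and edge streams into the enhanced image."""
    def __init__(self):
        super().__init__()
        self.content = EncoderDecoder()
        self.edge = SpatiallyVariantRNN()
        self.fuse = nn.Conv2d(6, 3, 3, padding=1)

    def forward(self, x):
        return self.fuse(torch.cat([self.content(x), self.edge(x)], dim=1))


if __name__ == "__main__":
    net = HybridNet()
    y = net(torch.rand(1, 3, 64, 64))          # low-light input -> enhanced output
    print(y.shape)                             # torch.Size([1, 3, 64, 64])
```

In the actual method a spatially variant RNN would typically propagate in several directions with learned per-pixel weights; the single horizontal scan above is only meant to show how a spatially varying gate modulates the recurrence.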