In this work, we introduce a dual transformation network for single image contrast enhancement, a task that typically aims to improve global contrast and enrich local details. To this end, we propose two parallel branches that handle these two goals respectively by learning different kinds of transformations. Specifically, one branch constructs a global transformation curve to improve global contrast, while the other directly predicts pixel offsets to enrich local details. In addition, we design a differentiable histogram loss that provides supervised information related to global contrast. In this way, network training can be guided by complementary constraints, e.g., a pixel-level mean squared error and a statistics-level histogram error. Experiments demonstrate that our method handles a wide range of contrast conditions and performs favorably against state-of-the-art methods.
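As a rough illustration of the statistics-level constraint mentioned above, the sketch below shows one common way to make a histogram loss differentiable, namely soft binning with Gaussian kernels, combined with a pixel-level MSE term. The bin count, bandwidth, loss weight, and the `SoftHistogramLoss` name are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code) of a differentiable histogram loss
# built from soft Gaussian-kernel binning. All hyperparameters are assumptions.
import torch
import torch.nn as nn


class SoftHistogramLoss(nn.Module):
    def __init__(self, num_bins: int = 64, sigma: float = 0.02):
        super().__init__()
        # Bin centers spread uniformly over the [0, 1] intensity range.
        self.register_buffer("centers", torch.linspace(0.0, 1.0, num_bins))
        self.sigma = sigma

    def soft_hist(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) with values in [0, 1].
        b = x.size(0)
        flat = x.reshape(b, -1, 1)                      # (B, N, 1)
        diff = flat - self.centers.view(1, 1, -1)       # (B, N, num_bins)
        # Each pixel gets a soft assignment to nearby bins, which keeps the
        # histogram differentiable with respect to the pixel values.
        weights = torch.exp(-0.5 * (diff / self.sigma) ** 2)
        hist = weights.sum(dim=1)                       # (B, num_bins)
        return hist / (hist.sum(dim=1, keepdim=True) + 1e-8)

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # Statistics-level error: L1 distance between normalized soft histograms.
        return torch.abs(self.soft_hist(pred) - self.soft_hist(target)).sum(dim=1).mean()


# Usage: combine the statistics-level term with a pixel-level MSE term,
# as the abstract describes; the 0.1 weight is an arbitrary example.
if __name__ == "__main__":
    pred = torch.rand(2, 3, 64, 64, requires_grad=True)
    target = torch.rand(2, 3, 64, 64)
    hist_loss = SoftHistogramLoss()
    loss = nn.functional.mse_loss(pred, target) + 0.1 * hist_loss(pred, target)
    loss.backward()  # gradients flow through the soft histogram
```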