Automatic image colorization without manual intervention is an ill-conditioned and inherently ambiguous problem. Most existing methods formulate colorization as a regression problem and learn parametric mappings from grayscale to color through deep neural networks. Because the grayscale-to-color mapping is multimodal, many applications do not require recovering the exact ground-truth color, so pairwise pixel-to-pixel learning is poorly justified. Color space conversion techniques have been proposed to avoid such direct pixel-level learning, but the resulting colorizations are often flat and unnatural. In this paper, we take the view that a reasonable solution is to generate a colorized result that simply looks natural: whatever color a region is assigned, the colorized region should be semantically and spatially consistent. We propose an effective semantic-aware automatic colorization model built on an unpaired cycle-consistent self-supervised network. A low-level monochrome loss, a perceptual identity loss, and a high-level semantic-consistency loss, together with an adversarial loss, are introduced to guide network self-training. We train and test our model on randomly selected subsets of PASCAL VOC 2012. Experimental results, including human subjective studies, demonstrate that our model achieves more convincing and superior results than state-of-the-art methods. Source code is available at https://github.com/YuSuen/ACCycleGAN.
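The abstract names four training objectives: a low-level monochrome loss, a perceptual identity loss, a high-level semantic-consistency loss, and an adversarial loss. The sketch below shows one plausible way such terms could be combined into a generator objective in PyTorch. The grayscale-conversion weights, the choice of feature extractors, and all loss weights (`w_*`) are illustrative assumptions, not details taken from the paper; the authoritative implementation is the linked ACCycleGAN repository.

```python
# Hedged sketch of a combined generator objective with the four loss terms
# named in the abstract. All weights and feature extractors are assumptions.
import torch
import torch.nn.functional as F

def rgb_to_gray(img: torch.Tensor) -> torch.Tensor:
    # ITU-R BT.601 luminance (assumed conversion); img has shape (N, 3, H, W).
    r, g, b = img[:, 0:1], img[:, 1:2], img[:, 2:3]
    return 0.299 * r + 0.587 * g + 0.114 * b

def generator_loss(colorized, gray_input, d_fake_logits,
                   low_feats_fn, high_feats_fn,
                   w_adv=1.0, w_mono=10.0, w_id=5.0, w_sem=1.0):
    """Weighted sum of adversarial, monochrome, perceptual-identity, and
    semantic-consistency terms. The weights here are placeholders, and
    low_feats_fn / high_feats_fn stand in for shallow and deep layers of a
    pretrained network (e.g. VGG activations)."""
    # Adversarial term: push the discriminator to classify the output as real.
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))

    # Low-level monochrome term: converting the colorized result back to
    # grayscale should reproduce the monochrome input.
    mono = F.l1_loss(rgb_to_gray(colorized), gray_input)

    # Replicate the single-channel input to 3 channels for feature extraction.
    gray_rgb = gray_input.expand(-1, 3, -1, -1)

    # Perceptual identity term: shallow features of the colorized image
    # should stay close to those of the grayscale input.
    ident = F.l1_loss(low_feats_fn(colorized), low_feats_fn(gray_rgb))

    # High-level semantic-consistency term: deep (semantic) features should
    # be preserved across colorization.
    sem = F.l1_loss(high_feats_fn(colorized), high_feats_fn(gray_rgb))

    return w_adv * adv + w_mono * mono + w_id * ident + w_sem * sem
```

The monochrome term plays the role of a cycle constraint in this sketch: because grayscale conversion is deterministic, it anchors the colorized output to its input without requiring paired ground-truth colors, which matches the unpaired, self-supervised framing of the abstract.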