Owing to refraction, absorption, and scattering of light by suspended particles in water, raw underwater images suffer from low contrast, blurred details, and color distortion. These degradations can significantly interfere with visual tasks such as segmentation and tracking. This paper proposes an underwater image enhancement solution based on a deep residual framework. First, a cycle-consistent adversarial network (CycleGAN) is employed to generate synthetic underwater images as training data for convolutional neural network models. Second, the very-deep super-resolution reconstruction model (VDSR) is introduced to underwater image applications; building on it, the Underwater Resnet model is proposed, a residual learning model for underwater image enhancement tasks. Furthermore, the loss function and training mode are improved: a multi-term loss function is formed from a mean squared error loss and a proposed edge difference loss, and an asynchronous training mode is proposed to improve the performance of the multi-term loss function. Finally, the impact of batch normalization is discussed. Underwater image enhancement experiments and a comparative analysis show that the color correction and detail enhancement performance of the proposed methods is superior to that of previous deep learning models and traditional methods.
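To make the multi-term loss concrete, the sketch below combines a mean squared error term with an edge difference term in PyTorch. The abstract does not specify the edge extractor or the weighting between the two terms, so the Laplacian kernel and the `edge_weight` value used here are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EdgeDifferenceLoss(nn.Module):
    """Penalizes differences between edge maps of the enhanced and reference
    images. The Laplacian operator is an assumed choice of edge extractor."""

    def __init__(self):
        super().__init__()
        # 3x3 Laplacian kernel, applied to each channel independently.
        kernel = torch.tensor([[0., 1., 0.],
                               [1., -4., 1.],
                               [0., 1., 0.]]).view(1, 1, 3, 3)
        self.register_buffer("kernel", kernel)

    def _edges(self, x):
        channels = x.shape[1]
        k = self.kernel.expand(channels, 1, 3, 3)
        return F.conv2d(x, k, padding=1, groups=channels)

    def forward(self, output, target):
        return F.mse_loss(self._edges(output), self._edges(target))


class MultiTermLoss(nn.Module):
    """MSE loss plus the edge difference loss; `edge_weight` is a
    hypothetical weighting factor, not a value from the paper."""

    def __init__(self, edge_weight=0.1):
        super().__init__()
        self.edge_loss = EdgeDifferenceLoss()
        self.edge_weight = edge_weight

    def forward(self, output, target):
        return F.mse_loss(output, target) + \
            self.edge_weight * self.edge_loss(output, target)
```

In an asynchronous training scheme of the kind the abstract describes, such a combined objective could be applied in alternating phases (e.g., some epochs driven mainly by the MSE term, others by the edge term) rather than as a single fixed sum; the exact schedule is detailed in the paper itself.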
               