Nowadays, researchers use vision-based measurement tools to record, detect, and monitor an atmospheric phenomenon called haze. It impedes the proper functioning of many outdoor industrial systems, such as autonomous driving, surveillance, and satellite imagery. Conventional visibility restoration methods cannot accurately recover image quality because of inaccurate estimations of haze thickness and the presence of color-cast effects. Deep neural networks have gained traction because they can dehaze images directly from hazy scenes. Therefore, a unique attention-based end-to-end dehazing network named oval-net has been proposed in this study to restore clear images from their hazy counterparts without employing the atmospheric scattering model. The oval-net is an encoder–decoder architecture that applies spatial and channel attention at each stage to focus on dominant and significant information while preventing irrelevant information from being transmitted from the encoder to the decoder, which allows quicker convergence. The proposed approach outperforms seven state-of-the-art algorithms in quantitative and qualitative assessments on a variety of synthetic and real-world hazy images, demonstrating its effectiveness for vision-based industrial systems.
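To illustrate the general idea of attention-gated skip connections described above, the following is a minimal sketch in PyTorch. It assumes a squeeze-and-excitation-style channel attention followed by a CBAM-style spatial attention applied to an encoder feature map before it is passed to the decoder; the class names (ChannelAttention, SpatialAttention, AttentionGate), hyperparameters, and layer choices are hypothetical illustrations, not the authors' actual oval-net implementation.

```python
# A minimal sketch of attention-filtered skip connections (assumed design,
# not the authors' oval-net code).
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Re-weights feature channels using global average pooling (squeeze-excite style)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)  # per-channel weights in [0, 1]


class SpatialAttention(nn.Module):
    """Re-weights spatial locations from channel-wise mean and max maps."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        avg_map = x.mean(dim=1, keepdim=True)
        max_map = x.max(dim=1, keepdim=True).values
        attn = self.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn  # per-pixel weights in [0, 1]


class AttentionGate(nn.Module):
    """Filters an encoder skip connection before it reaches the decoder stage."""
    def __init__(self, channels):
        super().__init__()
        self.channel_attn = ChannelAttention(channels)
        self.spatial_attn = SpatialAttention()

    def forward(self, skip):
        return self.spatial_attn(self.channel_attn(skip))


if __name__ == "__main__":
    # Hypothetical encoder stage output: suppress irrelevant features
    # before concatenating them into the decoder.
    skip_features = torch.randn(1, 64, 128, 128)
    gated = AttentionGate(64)(skip_features)
    print(gated.shape)  # torch.Size([1, 64, 128, 128])
```

In this sketch, the attention gate acts as a learned filter on the skip path: channel attention emphasizes feature maps that carry haze-relevant structure, and spatial attention emphasizes image regions where restoration matters most, which is one plausible way an encoder–decoder could avoid forwarding irrelevant information as the abstract describes.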
               