Images captured in low-light environments suffer severe degradation from insufficient illumination, which in turn degrades the performance of industrial and civilian imaging devices. To address the noise, chromatic aberration, and detail distortion that remain when existing enhancement methods are applied to low-light images, this paper proposes an integrated learning approach, LightingNet, for low-light image enhancement. LightingNet consists of two core components: 1) a complementary learning sub-network and 2) a vision transformer (ViT) low-light enhancement sub-network. The ViT sub-network learns and fits the current data to provide local high-level features through a full-scale architecture, while the complementary learning sub-network provides global fine-tuned features through learning transfer. Extensive experiments confirm the effectiveness of the proposed LightingNet.
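The two-branch design described above (a local feature branch fused with a global fine-tuned branch) can be sketched in miniature. This is a hypothetical illustration only: the branch functions, the fusion weight `alpha`, and all names are assumptions standing in for the paper's learned sub-networks, not the actual LightingNet implementation.

```python
import numpy as np

def vit_branch(img):
    # Placeholder for the ViT enhancement sub-network: a simple
    # gamma-style brightening stands in for learned local high-level features.
    return np.clip(img ** 0.5, 0.0, 1.0)

def complementary_branch(img):
    # Placeholder for the transfer-learned complementary sub-network: a
    # global mean shift stands in for globally fine-tuned features.
    return np.clip(img + (0.5 - img.mean()), 0.0, 1.0)

def lightingnet_sketch(img, alpha=0.7):
    # Weighted fusion of the local and global streams (alpha is assumed;
    # the real network learns how to combine the two feature sets).
    return np.clip(alpha * vit_branch(img) + (1 - alpha) * complementary_branch(img),
                   0.0, 1.0)

dark = np.full((4, 4, 3), 0.1)   # a uniformly dark toy "image" in [0, 1]
out = lightingnet_sketch(dark)
print(out.mean() > dark.mean())  # the sketch brightens the input
```

The fusion step is the key idea: neither branch alone supplies both the local detail and the global exposure correction that the combined output carries.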