Existing shadow removal methods often struggle with two problems: color inconsistencies in shadow areas and artifacts along shadow boundaries. To address these problems, we propose a novel shadow mask-based semantic-aware network (S2Net) that uses shadow masks as guidance for shadow removal. The color inconsistency problem is solved in two steps. First, we use a series of semantic-guided dilated residual (SDR) blocks to transfer statistical information from non-shadow areas to shadow areas. The shadow mask-based semantic transformation (SST) operation in SDR enables the network to remove shadows while keeping non-shadow areas intact. Then, we design a refinement block that incorporates semantic knowledge of shadow masks and applies learned modulated convolution kernels to produce traceless, consistent output. To remove artifacts along shadow boundaries, we propose a newly designed boundary loss, which encourages spatial coherence around shadow boundaries. By including the boundary loss as part of the loss function, a significant portion of artifacts along shadow boundaries can be removed. Extensive experiments on the ISTD, ISTD+, SRD, and SBU datasets show that S2Net outperforms existing shadow removal methods.
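The abstract names two mechanisms without giving formulas: a mask-guided transfer of statistics from non-shadow to shadow regions (SST), and a boundary loss that encourages spatial coherence around shadow edges. As a rough illustrative sketch only, and not the paper's actual SST operation or boundary loss, these ideas might be prototyped in NumPy as follows (all function names and details here are assumptions for illustration):

```python
import numpy as np

def mask_guided_stat_transfer(feat, mask, eps=1e-5):
    """Illustrative sketch (NOT the paper's SST): renormalize shadow-region
    features channel-wise so their mean/std match the non-shadow region,
    while leaving non-shadow pixels untouched.

    feat: (C, H, W) feature map
    mask: (H, W), 1 = shadow, 0 = non-shadow
    """
    out = feat.copy()
    shadow = mask.astype(bool)
    nonshadow = ~shadow
    for c in range(feat.shape[0]):
        s = feat[c][shadow]
        n = feat[c][nonshadow]
        mu_s, sd_s = s.mean(), s.std() + eps
        mu_n, sd_n = n.mean(), n.std() + eps
        # shift/scale shadow statistics toward non-shadow statistics
        out[c][shadow] = (s - mu_s) / sd_s * sd_n + mu_n
    return out

def boundary_band(mask):
    """Pixels where the shadow mask changes in a 4-neighbourhood,
    i.e. a thin band along the shadow boundary."""
    band = np.zeros(mask.shape, dtype=bool)
    band[:-1, :] |= mask[:-1, :] != mask[1:, :]
    band[1:, :] |= mask[1:, :] != mask[:-1, :]
    band[:, :-1] |= mask[:, :-1] != mask[:, 1:]
    band[:, 1:] |= mask[:, 1:] != mask[:, :-1]
    return band

def boundary_loss(pred, mask):
    """Illustrative boundary loss: mean absolute gradient of the prediction
    restricted to the boundary band, penalizing abrupt changes across the
    shadow edge (a total-variation-style coherence term)."""
    band = boundary_band(mask)
    gx = np.abs(np.diff(pred, axis=1))  # horizontal gradients
    gy = np.abs(np.diff(pred, axis=0))  # vertical gradients
    # count a gradient if either of its endpoints lies in the band
    wx = band[:, :-1] | band[:, 1:]
    wy = band[:-1, :] | band[1:, :]
    denom = wx.sum() + wy.sum()
    return (gx[wx].sum() + gy[wy].sum()) / max(denom, 1)
```

A smooth prediction yields zero boundary loss, while a prediction with a sharp jump across the shadow edge is penalized; the statistics transfer leaves non-shadow pixels exactly unchanged, mirroring the abstract's claim that non-shadow areas are kept intact.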
               