Visual semantic segmentation is a key technology for scene understanding in autonomous driving, and its accuracy is degraded by lighting changes in images. This paper proposes a novel multi-exposure fusion approach for visual semantic enhancement in autonomous driving. First, a multi-exposure image sequence is aligned to construct a stable image input. Second, the high-contrast regions of the multi-exposure image sequence are evaluated by a context aggregation network (CAN), which predicts a weight map for each image. Finally, a high-quality image is generated by weighted fusion of the multi-exposure image sequence. The proposed approach is validated on an HDR dataset derived from Cityscapes and on real-environment data. The experimental results show that the proposed method effectively restores features lost under changing illumination and improves the accuracy of subsequent semantic segmentation.
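In the paper, the per-image weight maps are predicted by the CAN; the final fusion step itself is a per-pixel weighted average over the exposure sequence. The following is a minimal NumPy sketch of that fusion stage only, assuming aligned float images in [0, 1] and non-negative weight maps (the function name, normalization term, and shapes are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def fuse_exposures(images, weights, eps=1e-8):
    """Weighted fusion of an aligned multi-exposure sequence.

    images:  list of N float arrays of shape (H, W, 3), values in [0, 1]
    weights: list of N float arrays of shape (H, W), e.g. CAN-predicted maps
    Returns a fused (H, W, 3) image.
    """
    images = np.stack(images)    # (N, H, W, 3)
    weights = np.stack(weights)  # (N, H, W)
    # Normalize weights across the sequence so they sum to 1 per pixel;
    # eps guards against division by zero where all weights vanish.
    weights = weights / (weights.sum(axis=0, keepdims=True) + eps)
    # Broadcast weights over the channel axis and blend.
    return (weights[..., None] * images).sum(axis=0)
```

For example, fusing a dark and a bright exposure with weights 1 and 3 yields a pixel value three-quarters of the way toward the bright image.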