
A novel multi-exposure fusion approach for enhancing visual semantic segmentation of autonomous driving


Visual semantic segmentation is a key technology for scene understanding in autonomous driving, and its accuracy is affected by lighting changes in images. This paper proposes a novel multi-exposure fusion approach to enhance visual semantic segmentation for autonomous driving. First, a multi-exposure image sequence is aligned to construct a stable image input. Second, high-contrast regions of the multi-exposure image sequence are evaluated by a context aggregation network (CAN) to predict an image weight map. Finally, a high-quality image is generated by weighted fusion of the multi-exposure image sequence. The proposed approach is validated on the Cityscapes HDR dataset and on real environment data. The experimental results show that the proposed method effectively restores features lost in light-changing images and enhances the accuracy of subsequent semantic segmentation.
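The final fusion step described in the abstract — combining an aligned multi-exposure sequence into one image using per-pixel weight maps — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the weight maps here are hypothetical placeholders, whereas the paper predicts them with a context aggregation network, and `fuse_exposures` is an assumed function name.

```python
import numpy as np

def fuse_exposures(images, weight_maps, eps=1e-8):
    """Fuse an aligned multi-exposure sequence by per-pixel weighted average.

    images: list of HxWx3 float arrays (aligned exposures)
    weight_maps: list of HxW arrays (in the paper, predicted by a CAN;
                 higher weight indicates higher local contrast/quality)
    """
    imgs = np.stack(images).astype(float)          # (N, H, W, 3)
    w = np.stack(weight_maps).astype(float)        # (N, H, W)
    w = w / (w.sum(axis=0, keepdims=True) + eps)   # normalize weights per pixel
    return (imgs * w[..., None]).sum(axis=0)       # weighted sum -> (H, W, 3)

# Toy example: a dark and a bright "exposure" with hand-set weights.
dark = np.zeros((2, 2, 3))
bright = np.ones((2, 2, 3))
w_dark = np.full((2, 2), 0.25)
w_bright = np.full((2, 2), 0.75)
fused = fuse_exposures([dark, bright], [w_dark, w_bright])  # 0.75 everywhere
```

The normalization step ensures the weights across exposures sum to one at each pixel, so well-exposed regions of each input dominate the fused result.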

Keywords: multi exposure; visual semantic; autonomous driving; exposure; image; semantic segmentation

Journal Title: Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering
Year Published: 2022



