Due to recent advances in learning-based semantic segmentation, road scene parsing usually achieves satisfactory results under normal illumination conditions. However, training a robust model for parsing nighttime road scenes remains very challenging, especially when semantic labels for the training samples are absent. In this paper, we propose a convolutional neural network (CNN)-based method for parsing nighttime road scenes in an unsupervised manner. The proposed system consists of an appearance transferring module and a segmentation module, which are coupled together and learned in an end-to-end fashion. The appearance transferring module transfers unlabeled images acquired during both daytime and nighttime into a shared latent feature space that encodes the image content of both scenes at the semantic level. The segmentation module then maps these features to the corresponding semantic labels. To better evaluate the proposed model, we also construct a new semantic segmentation dataset containing 1,566 nighttime images. Extensive experimental results on the proposed benchmark show that the proposed model achieves significant improvements over the baselines as well as a recently released system.
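
To make the two-module design concrete, below is a minimal sketch, assuming a PyTorch implementation: a shared encoder stands in for the appearance transferring module and maps a daytime or nighttime image into a latent feature space, and a segmentation head decodes those features into per-pixel class logits. The layer widths, depths, class count, and all module names (AppearanceEncoder, SegmentationHead, NightParser) are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of the two-module design; architecture details are
# placeholders and do not reproduce the paper's actual network.
import torch
import torch.nn as nn


class AppearanceEncoder(nn.Module):
    """Maps a daytime or nighttime image into a shared latent feature space."""

    def __init__(self, in_channels: int = 3, latent_channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, latent_channels, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class SegmentationHead(nn.Module):
    """Decodes shared latent features into per-pixel class logits."""

    def __init__(self, latent_channels: int = 64, num_classes: int = 19):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_channels, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, num_classes, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)


class NightParser(nn.Module):
    """Couples the two modules so they can be trained end-to-end."""

    def __init__(self, num_classes: int = 19):
        super().__init__()
        self.encoder = AppearanceEncoder()
        self.head = SegmentationHead(num_classes=num_classes)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        z = self.encoder(image)  # shared day/night latent features
        return self.head(z)      # per-pixel semantic logits


if __name__ == "__main__":
    model = NightParser(num_classes=19)
    day = torch.randn(1, 3, 128, 256)    # placeholder daytime image
    night = torch.randn(1, 3, 128, 256)  # placeholder nighttime image
    # Both domains pass through the same encoder and segmentation head.
    print(model(day).shape, model(night).shape)  # -> (1, 19, 128, 256) each
```

In this sketch, routing both day and night images through a single encoder is one simple way to realize a shared latent space; the paper's actual mechanism for aligning the two domains at the semantic level may differ.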
               