Image segmentation in robotics is an active research field in which neural networks have shown promising performance. In this paper, we introduce MapSegNet, a deep convolutional neural network for indoor map segmentation that partitions indoor maps into smaller units, including rooms, corridors, windows, and furniture. The proposed model consists of an encoder phase that captures context and a corresponding decoder phase that restores the feature maps to the original input resolution. A skip-connection design is introduced that fuses multi-scale feature maps between the encoder and decoder phases; it increases the flow of information between the two phases and improves the model's generalization. We conduct empirical studies on both abstract maps and detailed maps: abstract maps contain only empty rooms and corridors, whereas detailed maps depict indoor spaces with additional objects such as furniture. We investigate the effectiveness of the proposed method on a variety of indoor maps and compare its performance with similar neural network models on multiple datasets. The results show that the proposed model achieves more accurate recognition or lower computational cost than other state-of-the-art segmentation techniques.
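The abstract only outlines the architecture, so the following is a minimal sketch, not the authors' implementation, of an encoder-decoder segmentation network whose decoder fuses feature maps from more than one encoder scale through its skip connections, in the spirit of the description above. The class name `MapSegSketch`, the layer widths, the number of scales, and the concatenation-based fusion scheme are all illustrative assumptions.

```python
# Hypothetical sketch of an encoder-decoder with multi-scale skip fusion;
# layer sizes and fusion details are assumptions, not the paper's design.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions, each followed by batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class MapSegSketch(nn.Module):
    def __init__(self, num_classes=4):  # e.g. room, corridor, window, furniture
        super().__init__()
        # Encoder: capture context at progressively coarser scales.
        self.enc1 = conv_block(1, 32)
        self.enc2 = conv_block(32, 64)
        self.enc3 = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        # Decoder: restore feature maps to the input resolution.
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(64 + 64 + 32, 64)   # receives a fused multi-scale skip
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(32 + 32, 32)
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                 # full resolution
        e2 = self.enc2(self.pool(e1))     # 1/2 resolution
        e3 = self.enc3(self.pool(e2))     # 1/4 resolution
        # Skip connection fusing two encoder scales: e2 is concatenated
        # directly, e1 is downsampled to the matching spatial size.
        d2 = self.up2(e3)
        e1_ds = F.avg_pool2d(e1, 2)
        d2 = self.dec2(torch.cat([d2, e2, e1_ds], dim=1))
        d1 = self.up1(d2)
        d1 = self.dec1(torch.cat([d1, e1], dim=1))
        return self.head(d1)              # per-pixel class logits


if __name__ == "__main__":
    net = MapSegSketch()
    logits = net(torch.randn(1, 1, 128, 128))   # a single-channel occupancy map
    print(logits.shape)                          # torch.Size([1, 4, 128, 128])
```

The design point the sketch illustrates is that each decoder stage sees encoder features from more than one resolution, which is one plausible reading of "fused multi-scale feature maps between the encoder and the decoder phases"; the paper itself should be consulted for the exact fusion mechanism.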
               