Simultaneous localization and mapping (SLAM), which addresses the joint estimation problem of self-localization and scene mapping, has been widely used in applications such as mobile robots, drones, and augmented reality (AR). However, most state-of-the-art SLAM approaches are designed under a static-world assumption and are prone to degradation caused by moving objects in dynamic scenes. This article presents a novel semantic visual-inertial SLAM system for dynamic environments that, building on VINS-Mono, performs real-time trajectory estimation by utilizing pixel-wise semantic segmentation results. We integrate a feature extraction and tracking framework into the front end of the SLAM system so that the time spent waiting for the semantic segmentation module is fully used to track feature points on subsequent camera images. In this way, the system can track feature points stably even under high-speed motion. We also construct a dynamic feature detection module that combines the pixel-wise semantic segmentation results with multi-view geometric constraints to exclude dynamic feature points. We evaluate our system on public datasets covering dynamic indoor and outdoor scenes. Several experiments demonstrate that our system achieves higher localization accuracy and robustness than state-of-the-art SLAM systems in challenging environments.
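The abstract does not include the paper's implementation, so the following is only a minimal sketch of the front-end idea it describes: semantic segmentation of a keyframe runs asynchronously while optical-flow tracking continues on incoming frames, so the wait for the mask is not wasted. It assumes OpenCV's pyramidal Lucas-Kanade tracker and a Python thread pool; `run_segmentation` and `process_keyframe` are hypothetical placeholders, not the authors' code.

```python
import concurrent.futures

import cv2
import numpy as np


def run_segmentation(gray):
    # Placeholder: a real system would run a pixel-wise segmentation CNN here.
    return np.zeros(gray.shape, dtype=np.int32)


pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)


def process_keyframe(keyframe_gray, pts, frame_stream):
    """Track features while the keyframe's segmentation runs in the background.

    keyframe_gray : grayscale keyframe image
    pts           : (N, 1, 2) float32 feature points detected on the keyframe
    frame_stream  : iterable of subsequent BGR camera frames
    """
    # Launch segmentation of the keyframe without blocking the tracker.
    seg_future = pool.submit(run_segmentation, keyframe_gray)

    prev_gray = keyframe_gray
    for frame in frame_stream:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # KLT optical flow keeps tracking while segmentation is still running.
        pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        pts = pts[status.ravel() == 1]
        prev_gray = gray
        if seg_future.done():
            break  # mask is ready: hand tracked points to dynamic-feature filtering

    return pts, seg_future.result()
```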
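Likewise, the dynamic feature detection step is only described at a high level. Below is a minimal sketch of how pixel-wise semantic labels might be combined with an epipolar (multi-view geometric) constraint to reject moving points, assuming OpenCV for the fundamental-matrix fit. `filter_dynamic_features`, `MOVABLE_CLASSES`, and the thresholds are illustrative assumptions, not the paper's actual method.

```python
import cv2
import numpy as np

# Illustrative set of class ids for potentially movable objects
# (e.g. person, rider, car in a Cityscapes-style label map).
MOVABLE_CLASSES = {11, 12, 13}


def filter_dynamic_features(pts_prev, pts_curr, seg_labels, epi_thresh=1.0):
    """Return a boolean mask of features considered static.

    pts_prev, pts_curr : (N, 2) float32 pixel coordinates of tracked features
    seg_labels         : (H, W) int array of per-pixel semantic class ids
    epi_thresh         : max point-to-epipolar-line distance in pixels
    """
    pts_prev = np.asarray(pts_prev, np.float32)
    pts_curr = np.asarray(pts_curr, np.float32)

    # 1) Semantic cue: flag features that land on potentially movable objects.
    cols = np.clip(pts_curr[:, 0].astype(int), 0, seg_labels.shape[1] - 1)
    rows = np.clip(pts_curr[:, 1].astype(int), 0, seg_labels.shape[0] - 1)
    on_movable = np.isin(seg_labels[rows, cols], list(MOVABLE_CLASSES))

    # 2) Geometric cue: fit a fundamental matrix on the presumed-static
    #    remainder, then measure every feature's epipolar residual.
    static_prev, static_curr = pts_prev[~on_movable], pts_curr[~on_movable]
    if len(static_prev) < 8:
        return ~on_movable  # too few points for RANSAC: semantics only
    F, _ = cv2.findFundamentalMat(static_prev, static_curr,
                                  cv2.FM_RANSAC, 1.0, 0.99)
    if F is None:
        return ~on_movable  # degenerate geometry: semantics only

    ones = np.ones((len(pts_prev), 1), np.float32)
    x1 = np.hstack([pts_prev, ones])   # homogeneous points in previous image
    x2 = np.hstack([pts_curr, ones])   # homogeneous points in current image
    lines = x1 @ F.T                   # epipolar lines in the current image
    num = np.abs(np.sum(lines * x2, axis=1))
    den = np.sqrt(lines[:, 0] ** 2 + lines[:, 1] ** 2) + 1e-9
    epi_ok = (num / den) < epi_thresh

    # A feature is kept only if both cues agree that it is static.
    return epi_ok & ~on_movable
```

Combining both cues is what makes the rejection robust: semantics alone would discard parked cars that are perfectly usable landmarks, while geometry alone can miss slow-moving objects whose residuals stay under the threshold.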
               