Feature-based visual simultaneous localization and mapping (SLAM) is an effective localization approach for robots in unknown environments. Classic handcrafted features perform well in 2D image-matching tasks. However, in the tracking task of SLAM, features extracted near object edges in the image are often unstable because of the lack of spatial information. In this paper, we refer to features at the edges of objects as edge-features and propose an effective method, named Edge-Feature Razor (EF-Razor), to handle them in SLAM. EF-Razor first uses the semantics provided by the YOLOv3 object detector to identify edge-features. By imposing additional constraints on edge-feature matching during tracking, EF-Razor effectively reduces the impact of unstable features on the SLAM system. EF-Razor then adjusts the information matrix to increase the system's trust in the filtered features, which makes the bundle-adjustment result more stable. To evaluate the proposed method, we integrate EF-Razor into ORB-SLAM2 and perform experiments. Comparison results on public datasets show that the proposed method reduces the absolute trajectory error by 7%.
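The abstract describes two ideas: classifying keypoints that fall near the border of a detected object's bounding box as edge-features, and downweighting those features in the information matrix used by bundle adjustment. A minimal sketch of that idea follows; the function names, the pixel margin, and the weight values are illustrative assumptions, not the authors' implementation.

```python
def classify_edge_features(keypoints, boxes, margin=5):
    """Flag keypoints lying within `margin` pixels of any detected
    bounding-box border as edge-features (assumed criterion).

    keypoints: list of (x, y) pixel coordinates
    boxes: list of (x1, y1, x2, y2) detector bounding boxes
    Returns a list of booleans, one per keypoint.
    """
    flags = []
    for (x, y) in keypoints:
        is_edge = False
        for (x1, y1, x2, y2) in boxes:
            inside = x1 <= x <= x2 and y1 <= y <= y2
            # Near any of the four borders of a box it lies inside.
            if inside and (x - x1 < margin or x2 - x < margin or
                           y - y1 < margin or y2 - y < margin):
                is_edge = True
                break
        flags.append(is_edge)
    return flags


def information_weights(edge_flags, edge_weight=0.2, interior_weight=1.0):
    """Per-feature scale factors for the information matrix: edge-features
    get a reduced weight so bundle adjustment trusts them less, while the
    filtered (interior) features keep full weight. Weights are assumed."""
    return [edge_weight if e else interior_weight for e in edge_flags]
```

For example, with a box `(8, 8, 100, 100)` and `margin=5`, a keypoint at `(10, 10)` is flagged as an edge-feature (2 px from the left border) and receives the reduced weight, while a keypoint at `(50, 50)` keeps full weight.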