This paper proposes a visual-inertial navigation system (VINS) combined with a dynamic object detection (DOD) algorithm to improve the localization and state estimation accuracy of autonomous driving vehicles (ADVs) in dynamic environments. First, based on the YOLOv5 network, we train the proposed DOD model to detect dynamic objects in the road environment. Second, by removing the feature points that fall within dynamic object regions and tracking only the remaining feature points, we eliminate the influence of dynamic objects. Furthermore, we model the global positioning system (GPS) measurement as a general factor and introduce its residual into the cost function to eliminate cumulative error. Finally, we validate the performance of the proposed method on public datasets and in real-world experiments. The results show that the proposed method effectively eliminates both the influence of dynamic objects and the cumulative error, providing theoretical guidance for ADV navigation in dynamic or large-scale outdoor environments.
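The feature-point removal step described above can be sketched as a simple geometric filter: discard any tracked point that falls inside a bounding box returned by the dynamic-object detector. This is a minimal illustrative sketch, not the paper's implementation; the function name, the list-based point and box representations, and the example coordinates are all assumptions for illustration.

```python
def filter_dynamic_features(points, boxes):
    """Keep only feature points that lie outside every dynamic-object box.

    points: list of (x, y) pixel coordinates from the feature tracker.
    boxes:  list of (x_min, y_min, x_max, y_max) detections from a
            dynamic-object detector (e.g. a YOLOv5-style model).
    """
    def in_box(p, b):
        x, y = p
        x0, y0, x1, y1 = b
        return x0 <= x <= x1 and y0 <= y <= y1

    # A point survives only if it is outside all detected dynamic regions.
    return [p for p in points if not any(in_box(p, b) for b in boxes)]

# Hypothetical example: two tracked points, one inside a detected vehicle box.
pts = [(50, 60), (300, 200)]
det = [(280, 180, 340, 240)]  # assumed detection box in pixel coordinates
static_pts = filter_dynamic_features(pts, det)
print(static_pts)  # [(50, 60)]
```

The surviving static points would then feed the VINS front end as usual, so the estimator never observes motion induced by moving vehicles or pedestrians.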