Robust long-term visual localization is challenging for a mobile robot, especially in changing environments, where dynamic scene changes degrade localization accuracy or even cause failures. Most existing methods eliminate dynamic changes as outliers, an approach that strictly relies on the static-world assumption. Conversely, we efficiently exploit the hidden regularities of changes to improve localization performance. In particular, we design a feature existence state (FES) matrix, built incrementally over long-term runs, to measure the evolution of time-varying changes. To address the timeliness problem of fixed parameters in offline-trained models, we propose an adaptive online stochastic learning (AOSL) method to model and predict the changing regularities of streaming feature states. The features with the largest probability of being observed can therefore be selected to boost visual localization. Leveraging the proposed AOSL method, we develop a lightweight and robust long-term topological localization system. Furthermore, we compare the performance of our method against state-of-the-art methods in diverse challenging scenes, including both public benchmarks and real-world experiments. Extensive experimental results validate that our method achieves better localization accuracy and memory efficiency, with competitive real-time performance.
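To make the general idea concrete, the following is a minimal Python sketch of how an FES matrix might be built incrementally and how per-feature observation probabilities could be tracked online. All names here (FeatureExistenceTracker, learning_rate, select_features) are hypothetical, and the simple exponentially weighted stochastic-approximation update is only a stand-in for the paper's AOSL method, not its actual algorithm.

```python
import numpy as np

class FeatureExistenceTracker:
    """Hypothetical sketch of a feature existence state (FES) matrix that is
    grown by one column per localization session, paired with a simple
    exponentially weighted online estimate of each feature's probability of
    being observed. The paper's AOSL method is more sophisticated; this only
    illustrates the overall flow."""

    def __init__(self, num_features: int, learning_rate: float = 0.1):
        # FES matrix: rows = map features, columns = sessions (binary states).
        self.fes = np.zeros((num_features, 0), dtype=np.uint8)
        # Uninformative prior: each feature observed with probability 0.5.
        self.prob = np.full(num_features, 0.5)
        self.lr = learning_rate

    def add_session(self, observed: np.ndarray) -> None:
        """Append one session's binary existence states (1 = observed) and
        update the per-feature observation probabilities online."""
        col = observed.astype(np.uint8).reshape(-1, 1)
        self.fes = np.hstack([self.fes, col])
        # Stochastic-approximation step toward the latest observation;
        # the exponential forgetting favors recent sessions.
        self.prob += self.lr * (observed - self.prob)

    def select_features(self, k: int) -> np.ndarray:
        """Return indices of the k features most likely to be observed next,
        i.e., the ones worth keeping for localization."""
        return np.argsort(self.prob)[::-1][:k]

# Example: two sessions over five features, then pick the two most stable.
tracker = FeatureExistenceTracker(num_features=5)
tracker.add_session(np.array([1, 0, 1, 1, 0]))
tracker.add_session(np.array([1, 0, 1, 0, 0]))
print(tracker.select_features(k=2))  # features seen in both sessions rank first
```

The exponential forgetting in the update step is one plausible way to address the timeliness concern the abstract raises about fixed offline-trained parameters: recent sessions dominate the estimate, so the predicted observation probabilities adapt as the environment changes.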
               