Abstract: In complex environments with long-term changes, such as variations in illumination, season, and viewpoint, robust, accurate, and high-frequency global positioning based on a LiDAR map remains challenging, yet it is crucial for autonomous vehicles and robots. To this end, this paper presents a novel observation model built on a cascaded multi-module siamese multi-task Convolutional Neural Network (CNN). In particular, a new pseudo-image representation of the LiDAR submap is designed to enrich scene texture and enhance rotation invariance. In addition, a novel siamese CNN coupling NeXtVLAD and Long Short-Term Memory (LSTM) is designed for the first time, which reliably predicts similarity and quaternion simultaneously. Finally, the predicted quaternion observation is integrated into an extended Kalman filter framework for multi-sensor fusion, achieving robust high-frequency global pose estimation. Extensive evaluations on the KITTI, NCLT, and real-world datasets suggest that the proposed method not only achieves remarkable precision-recall performance but also effectively improves the robustness and accuracy of long-term positioning.
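The fusion step described above, folding the network's quaternion observation into an extended Kalman filter, follows the standard EKF measurement update. A minimal sketch of that update is given below; the state layout, measurement function `h`, and Jacobian `H` used in the example are illustrative assumptions, not the paper's actual models:

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """Generic EKF measurement update (sketch only; the paper's actual
    state and measurement models are not specified in the abstract).

    x : (n,) state estimate          P : (n, n) state covariance
    z : (m,) measurement             h : callable, predicted measurement h(x)
    H : (m, n) measurement Jacobian  R : (m, m) measurement noise covariance
    """
    y = z - h(x)                       # innovation (residual)
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y                  # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P  # corrected covariance
    return x_new, P_new

# Toy example: a 2-state filter observing only the first component.
x = np.zeros(2)
P = np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.5]])
z = np.array([1.0])
x_new, P_new = ekf_update(x, P, z, lambda s: H @ s, H, R)
# The estimate moves toward the measurement; the observed component's
# variance shrinks while the unobserved one is unchanged.
```

In the paper's setting, the network's predicted quaternion would enter as (part of) `z`, with a linearized attitude Jacobian as `H`; quaternion states additionally require renormalization after the additive update, which this generic sketch omits.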