The expansion of autonomous driving operations requires the research and development of accurate and reliable self-localization approaches. These include visual odometry methods, whose accuracy is potentially superior to that of GNSS-based techniques and which also work in signal-denied areas. This paper presents an in-depth review of state-of-the-art visual and point cloud odometry methods, along with a direct performance comparison of several of these techniques in the autonomous driving context. The evaluated methods include camera, LiDAR, and multi-modal approaches, covering both knowledge-based and learning-based algorithms, which are compared from a common perspective. This set is subjected to a series of tests on public road driving datasets, from which the performance of these techniques is benchmarked and quantitatively measured. Furthermore, we discuss their effectiveness under challenging conditions such as pronounced lighting variations, open spaces, and the presence of dynamic objects in the scene. The results demonstrate the higher accuracy of point cloud-based methods, which surpass visual techniques by roughly 33.14% in trajectory error. This survey also identifies a performance stagnation in state-of-the-art methodologies, especially under complex conditions. We also examine how multi-modal architectures can circumvent individual sensor limitations. This aligns with the benchmarking results, where the multi-modal algorithms exhibit greater consistency across all scenarios, outperforming the best LiDAR method (CT-ICP) by 5.68% in translational drift. Finally, we address how current advances in AI offer a way past this development plateau.
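The trajectory-error comparisons mentioned above rely on standard odometry evaluation metrics. As a minimal illustration (not the paper's exact evaluation protocol, and with a hypothetical function name), the sketch below computes the translational Absolute Trajectory Error (ATE RMSE) after a rigid Kabsch/Umeyama-style alignment, a common way to score an estimated trajectory against ground truth:

```python
import numpy as np

def absolute_trajectory_error(gt, est):
    """Hypothetical sketch: translational ATE (RMSE) after rigid alignment.

    gt, est: (N, 3) arrays of ground-truth and estimated positions,
    assumed already time-synchronized and in consistent units (meters).
    """
    # Center both trajectories; odometry estimates are defined only up
    # to a rigid transform, so we align before measuring error.
    gt_mean = gt.mean(axis=0)
    P = est - est.mean(axis=0)   # centered estimate (source)
    Q = gt - gt_mean             # centered ground truth (target)

    # Kabsch: rotation minimizing ||Q - P R^T||, with reflection guard.
    H = P.T @ Q
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T

    aligned = P @ R.T + gt_mean
    # RMSE over per-pose Euclidean position errors.
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))
```

Benchmarks such as KITTI instead report relative translational drift (percent error over fixed path lengths), which is the "translational drift" figure quoted above; ATE is shown here only because it is the simpler metric to sketch.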