In this letter, we build on recent advances in deep learning to improve SE(3) transformations, enabling more accurate motion estimation in mobile robots. We propose using denoising autoencoders (DAEs) to address the challenges presented by modern LIDARs. Our approach comprises two stages: a novel pre-processing stage for robust feature identification and a scan matching stage for motion estimation. In the pre-processing stage, LIDAR data are projected into a two-dimensional (2-D) image format and a DAE is used to extract salient features. These features serve as a mask for the original data, which is then re-projected into full 3-D space. Scan matching is performed on the re-projected data to estimate motion in SE(3). We analyze the performance of our approach using real-world data from the University of Michigan North Campus long-term vision and LIDAR dataset and test generalization on LIDAR data from the KITTI dataset. We show that our approach generalizes across domains, reduces the per-estimate error of standard iterative closest point (ICP) methods on average by 25.5% for the translational component and 57.53% for the rotational component, and reduces the computation time of state-of-the-art ICP methods by a factor of 7.94 on average while achieving competitive performance.
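The pre-processing stage described above can be sketched in code. The following is a minimal, hypothetical illustration of the project-mask-reproject idea: a spherical projection of a LIDAR point cloud into a 2-D range image, followed by masking with a saliency map (standing in for the DAE output) and recovery of the selected 3-D points. The projection geometry, resolution, fields of view, and the 0.5 saliency threshold are assumptions for illustration, not the paper's exact parameters.

```python
import numpy as np

def project_to_range_image(points, h=64, w=1024, fov_up=15.0, fov_down=-15.0):
    """Spherically project an (N, 3) LIDAR point cloud to an (h, w) range image.

    Resolution and vertical field of view are hypothetical defaults; a real
    system would match them to the sensor (e.g. a 64-beam spinning LIDAR).
    Returns the range image and per-pixel back-pointers into `points`.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)  # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))
    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)

    # Map azimuth to columns and elevation to rows.
    u = ((1.0 - (yaw + np.pi) / (2.0 * np.pi)) * w).astype(int) % w
    v = (fov_up_r - pitch) / (fov_up_r - fov_down_r) * h
    v = np.clip(v.astype(int), 0, h - 1)

    img = np.zeros((h, w), dtype=np.float32)
    idx = np.full((h, w), -1, dtype=int)  # back-pointers for re-projection
    img[v, u] = r
    idx[v, u] = np.arange(points.shape[0])
    return img, idx

def mask_and_reproject(points, idx, saliency, threshold=0.5):
    """Keep only the 3-D points whose pixels a saliency map (e.g. a DAE's
    output) marks as salient, re-projecting the mask back into 3-D space."""
    keep = idx[(saliency > threshold) & (idx >= 0)]
    return points[keep]
```

Scan matching (e.g. ICP) would then be run on the output of `mask_and_reproject` for consecutive scans; because the masked clouds are much smaller than the raw ones, this is where the reported speedup over full-cloud ICP would come from.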