We propose mVIL-Fusion, a three-level multisensor fusion system that achieves robust state estimation and globally consistent mapping in perceptually degraded environments. First, LiDAR-depth-assisted visual-inertial odometry (VIO), with synchronous LiDAR odometry (LO) prediction and distortion correction, serves as the frontend of our system. Second, a novel double-sliding-window optimization in the midend jointly exploits LiDAR scan-to-scan translation constraints (for VIO status detection) and scan-to-map rotation constraints (for local mapping) to enhance the accuracy and robustness of the state estimation. In the backend, loop closures between local-map-based keyframes are identified with altitude verification, and the global map is generated by incremental smoothing of a pose-only factor graph with an altitude prior. The performance of our system is verified on both a public dataset and several self-collected sequences in challenging environments. To benefit the robotics community, our implementation is available at https://github.com/Stan994265/mVIL-Fusion.
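The backend step (incremental smoothing of a pose-only factor graph with an altitude prior) can be illustrated with a minimal GTSAM sketch. This is our own illustration, not the authors' code: the key names, noise sigmas, placeholder relative poses, and the large-sigma trick used to express a z-only prior are all assumptions made for the example.

```cpp
// Minimal sketch (illustrative, not the paper's implementation) of incremental
// smoothing over a pose-only factor graph: an anchor prior, odometry and
// loop-closure between-factors, and a soft altitude prior on one keyframe.
#include <gtsam/geometry/Pose3.h>
#include <gtsam/inference/Symbol.h>
#include <gtsam/nonlinear/ISAM2.h>
#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/nonlinear/Values.h>
#include <gtsam/slam/BetweenFactor.h>
#include <gtsam/slam/PriorFactor.h>

using namespace gtsam;

int main() {
  ISAM2 isam;                   // incremental smoother
  NonlinearFactorGraph graph;   // factors added in this update
  Values initial;               // initial guesses for new keys

  // Anchor the first keyframe pose (noise order: rotation, then translation).
  auto priorNoise = noiseModel::Diagonal::Sigmas(
      (Vector(6) << 0.01, 0.01, 0.01, 0.05, 0.05, 0.05).finished());
  graph.add(PriorFactor<Pose3>(Symbol('x', 0), Pose3(), priorNoise));
  initial.insert(Symbol('x', 0), Pose3());

  // Odometry constraint between consecutive keyframes (placeholder motion).
  auto odomNoise = noiseModel::Diagonal::Sigmas(
      (Vector(6) << 0.02, 0.02, 0.02, 0.1, 0.1, 0.1).finished());
  Pose3 odom(Rot3(), Point3(1.0, 0.0, 0.0));
  graph.add(BetweenFactor<Pose3>(Symbol('x', 0), Symbol('x', 1),
                                 odom, odomNoise));
  initial.insert(Symbol('x', 1), odom);

  // Soft altitude prior: huge sigmas on all components except z-translation,
  // so the factor effectively constrains only the keyframe's height.
  auto altNoise = noiseModel::Diagonal::Sigmas(
      (Vector(6) << 1e6, 1e6, 1e6, 1e6, 1e6, 0.1).finished());
  graph.add(PriorFactor<Pose3>(Symbol('x', 1),
                               Pose3(Rot3(), Point3(0.0, 0.0, 0.0)),
                               altNoise));

  // A loop closure (here a toy x1 -> x0 constraint) found by the backend's
  // local-map-based place recognition with altitude verification.
  graph.add(BetweenFactor<Pose3>(Symbol('x', 1), Symbol('x', 0),
                                 odom.inverse(), odomNoise));

  isam.update(graph, initial);  // one incremental smoothing step
  Values estimate = isam.calculateEstimate();
  estimate.print("global keyframe poses:\n");
  return 0;
}
```

Because only keyframe poses appear as variables, each iSAM2 update stays cheap even as the map grows; the altitude prior suppresses the vertical drift that a purely relative pose graph cannot observe.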