Visual simultaneous localization and mapping (VSLAM) is a suitable method for the positioning and navigation of intelligent unmanned systems in Global Navigation Satellite System (GNSS)-denied environments, but it still faces difficulties in repetitive large-scale environments. In this article, a VSLAM method based on bifocal-binocular vision is proposed. By introducing a binocular camera whose lenses have different focal lengths, the perception ability of the system in vast spaces is improved, since the cameras complement each other at different working distances. Meanwhile, taking the inherent structure of the scene into account, an additional optimization is proposed to reduce the accumulated error, based on marker-distribution knowledge obtained from online placement inference. The proposed algorithm significantly improves the stability and accuracy of the VSLAM system in repetitive large-scale scenes, and it is validated on both virtual datasets and in real-world environments.
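
The abstract gives no implementation details, so the following is only a rough sketch of the working-distance complementarity it describes: a short-focal-length camera keeps nearby and off-axis landmarks in view, while a long-focal-length camera observes distant landmarks with finer angular resolution. The camera parameters, the pixel-noise value, and the selection rule below are all assumptions made for illustration, not the paper's method.

```python
import math

# Hypothetical sketch (not the paper's algorithm): how a short- and a
# long-focal-length camera can complement each other at different working
# distances. All numbers (focal lengths, sensor width, pixel noise) are
# assumed values chosen purely for illustration.

def half_fov_deg(focal_px, sensor_width_px):
    """Horizontal half field of view of a pinhole camera, in degrees."""
    return math.degrees(math.atan((sensor_width_px / 2.0) / focal_px))

def lateral_sigma_m(depth_m, focal_px, pixel_noise_px=0.5):
    """Lateral position error implied by pixel noise at a given depth:
    one pixel spans roughly depth / focal_px metres."""
    return depth_m / focal_px * pixel_noise_px

def choose_camera(depth_m, lateral_offset_m,
                  wide_f_px=400.0, tele_f_px=1600.0, sensor_width_px=1280):
    """Prefer the long-focal camera (finer angular resolution) whenever the
    landmark still falls inside its narrow field of view; otherwise fall
    back to the wide camera, which keeps nearby / off-axis points visible."""
    angle_deg = math.degrees(math.atan2(lateral_offset_m, depth_m))
    if angle_deg <= half_fov_deg(tele_f_px, sensor_width_px):
        return "long-focal", lateral_sigma_m(depth_m, tele_f_px)
    return "short-focal", lateral_sigma_m(depth_m, wide_f_px)

if __name__ == "__main__":
    for depth, offset in [(3.0, 2.0), (30.0, 2.0), (80.0, 5.0)]:
        cam, sigma = choose_camera(depth, offset)
        print(f"landmark at {depth:4.0f} m, offset {offset:3.1f} m "
              f"-> {cam} camera, ~{sigma:.3f} m lateral sigma")
```

Under these assumed parameters, nearby off-axis points fall outside the narrow field of view and are handled by the short-focal camera, while distant points are measured more precisely by the long-focal camera, which is the kind of complementary coverage the abstract alludes to.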