Similar-appearing places are known to confuse place recognition methods (a phenomenon known as perceptual aliasing). Robust visual SLAM methods have shown impressive resilience in scenarios with moderate perceptual aliasing. In this work, we evaluate visual SLAM in scenarios where perceptual aliasing occurs with greater frequency. First, we evaluate visual SLAM front-ends. We modify the environment by replicating a simple, highly textured patch. Surprisingly, this simple patch bypasses all the geometric checks in the state-of-the-art ORB-SLAM, resulting in false loop closures and hence a corrupted map. The patch can be printed on plain paper with an ordinary printer, or displayed as digital content on a screen. We show that close-ups (approaching a door, turning into a corridor, interacting with an object) are the most vulnerable situations, and that this vulnerability exists across multiple loop-closing pipelines. Second, we evaluate robust SLAM back-ends. These back-ends have shown the ability to recover from false positives generated under multiple policies (random, local, randomly grouped, and locally grouped). We propose a novel policy, locally symmetric, for generating false loop closures that successfully attacks multiple robust back-ends on public datasets. Surprisingly, for commonly occurring weaker structures (e.g., multiple floors), even a single false loop manages to fool multiple robust back-ends. We hope these findings will motivate the community to evolve SLAM solutions for adversarial environments. To further progress in this direction, we release a novel dataset with adversarial content targeting both visual SLAM front-ends and back-ends.
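To make the front-end attack concrete, the sketch below shows a generic ORB-feature + RANSAC inlier test of the kind loop-closure pipelines commonly use for geometric verification. This is not ORB-SLAM's exact implementation; the function name, image inputs, and inlier threshold are illustrative assumptions. The point it demonstrates: two views of the same planar replicated patch relate by a homography, so RANSAC finds a large self-consistent inlier set even though the observations come from physically different places.

```python
# A minimal sketch of geometric loop-closure verification, assuming OpenCV
# and NumPy. Not ORB-SLAM's actual check; thresholds are hypothetical.
import cv2
import numpy as np

def passes_geometric_check(img_a, img_b, min_inliers=30):
    """Return True if img_b matches img_a with enough RANSAC inliers,
    mimicking the inlier-count test typical loop-closure pipelines apply."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    if len(matches) < min_inliers:
        return False
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    # A planar textured patch seen twice relates by a homography, so a
    # replicated patch yields a large, geometrically consistent inlier
    # set despite the two observations coming from different places.
    H, mask = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 3.0)
    return H is not None and int(mask.sum()) >= min_inliers
```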
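For the back-end evaluation, the sketch below illustrates how false loop-closure edges might be generated under a few of the policies named above. The exact constructions (segment offsets, the mirrored-pair reading of "locally symmetric") are assumptions for illustration, not the paper's specification.

```python
# Sketch: generating false loop-closure edges under selected policies.
# Policy names follow the abstract; their definitions here are assumed.
import random

def false_loops(n_poses, n_false, policy="locally_symmetric", seed=0):
    """Return (i, j) pose-index pairs to inject as false closures."""
    assert n_poses > 3 * n_false, "trajectory too short for this sketch"
    rng = random.Random(seed)
    if policy == "random":
        # Uncorrelated closures between arbitrary poses.
        pairs = set()
        while len(pairs) < n_false:
            i, j = rng.sample(range(n_poses), 2)
            pairs.add((min(i, j), max(i, j)))
        return sorted(pairs)
    if policy == "locally_grouped":
        # A contiguous block of closures between two nearby segments,
        # as a replicated patch seen twice along a corridor might produce.
        i0 = rng.randrange(0, n_poses - 3 * n_false)
        j0 = i0 + n_false + rng.randrange(1, n_false + 1)
        return [(i0 + k, j0 + k) for k in range(n_false)]
    if policy == "locally_symmetric":
        # Hypothetical reading of the proposed policy: closures mirrored
        # about a local centre c, pairing pose c-k with pose c+k, which
        # mimics symmetric structures such as repeated floors.
        c = rng.randrange(n_false, n_poses - n_false)
        return [(c - k, c + k) for k in range(1, n_false + 1)]
    raise ValueError(f"unknown policy: {policy}")
```

For example, `false_loops(500, 10)` returns ten edges mirrored about one trajectory index; adding them to a pose graph as loop-closure constraints reproduces the kind of attack setting the abstract describes.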