Uncertainty quantification is essential for robotic perception: overconfident or point estimators can lead to collisions and damage to both the environment and the robot. In this letter, we evaluate scalable approaches to uncertainty quantification in single-view supervised depth learning, specifically MC dropout and deep ensembles. For MC dropout in particular, we explore the effect of dropout at different levels of the architecture, and we show that adding dropout in all layers of the encoder yields better results than the other variants found in the literature. This configuration performs on par with deep ensembles at a much lower memory footprint, which is relevant for real applications. Finally, we use the depth uncertainty in a pseudo-RGBD ICP pipeline and demonstrate its potential for estimating accurate two-view relative motion at real scale.
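
To make the MC dropout scheme concrete, here is a minimal PyTorch sketch (not the authors' implementation; the `DepthNet` architecture, dropout rate, and sample count are all hypothetical). Dropout follows every encoder layer and is kept active at test time, so the spread over N stochastic forward passes serves as a per-pixel depth uncertainty estimate.

```python
# Minimal sketch of MC dropout for single-view depth uncertainty.
# DepthNet and all hyperparameters here are illustrative assumptions,
# not the architecture from the letter.
import torch
import torch.nn as nn

class DepthNet(nn.Module):
    """Toy encoder-decoder; dropout is placed after every encoder layer."""
    def __init__(self, p=0.2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(), nn.Dropout2d(p),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(), nn.Dropout2d(p),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

@torch.no_grad()
def mc_dropout_depth(model, image, n_samples=25):
    """Run N stochastic forward passes with dropout active; return the
    per-pixel predictive mean depth and its standard deviation."""
    # train() keeps dropout stochastic at inference; with BatchNorm in the
    # model one would instead enable only the dropout modules.
    model.train()
    preds = torch.stack([model(image) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)

model = DepthNet()
img = torch.rand(1, 3, 64, 64)
depth_mean, depth_std = mc_dropout_depth(model, img)  # both (1, 1, 64, 64)
```

In the same spirit, the per-pixel variances could weight the alignment step of a pseudo-RGBD ICP, down-weighting uncertain points. The following is a hedged sketch of one closed-form, inverse-variance-weighted rigid alignment (weighted Kabsch) under assumed correspondences, not the paper's method:

```python
# Hedged sketch: uncertainty-weighted rigid alignment of corresponding
# 3D points, as one step of a pseudo-RGBD ICP loop. Correspondences are
# assumed given; the weighting scheme is an illustrative choice.
import numpy as np

def weighted_rigid_align(src, dst, depth_var, eps=1e-6):
    """Find R, t minimising sum_i w_i * ||R @ src_i + t - dst_i||^2,
    with inverse-variance weights w_i = 1 / (var_i + eps).
    src, dst: (N, 3) arrays of matched points; depth_var: (N,) variances."""
    w = 1.0 / (depth_var + eps)
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(axis=0)          # weighted centroids
    mu_d = (w[:, None] * dst).sum(axis=0)
    H = (w[:, None] * (src - mu_s)).T @ (dst - mu_d)  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # fix reflection
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Because the depth network predicts metric depth, the resulting two-view motion estimate carries real scale, which a pure monocular pipeline cannot recover.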