In the pursuit of autonomous flying robots, the scientific community has been developing onboard real-time algorithms for localisation, mapping, and planning. Despite recent progress, the available solutions still lack accuracy and robustness in many respects. While mapping for autonomous cars has received a substantial boost from deep-learning techniques that enhance LIDAR measurements through image-based depth completion, the large viewpoint variations experienced by aerial vehicles still pose major challenges for learning-based mapping approaches. In this letter, we propose a depth completion and uncertainty estimation approach that better handles the challenges of aerial platforms, such as large viewpoint and depth variations and limited computing resources. The core of our method is a novel compact network that performs both depth completion and confidence estimation using an image-guided approach. Real-time performance onboard a GPU suitable for small flying robots is achieved by sharing deep features between the two tasks. Experiments demonstrate that our network outperforms the state of the art in single-view depth completion and uncertainty estimation on mobile GPUs. We further present a new photorealistic aerial depth completion dataset that exhibits more challenging depth completion scenarios than the established indoor and car-driving datasets. The dataset includes an open-source, visual-inertial UAV simulator for photorealistic data generation. Our results show that our network, trained on this dataset, can be deployed directly on real-world outdoor aerial public datasets without fine-tuning or style transfer.
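The key architectural idea, sharing deep features between the depth completion and confidence heads, can be illustrated with a minimal sketch. The code below is not the paper's network: it assumes a PyTorch implementation, and the toy encoder/decoder, channel widths, layer names, and the choice of concatenating the RGB image with the sparse depth map into a 4-channel input are all hypothetical, chosen only to show how one shared backbone can feed two output heads.

```python
# Minimal sketch of a dual-head depth-completion network (illustrative only).
# The abstract states that deep features are shared between the depth and
# confidence tasks; everything else here (layer sizes, names, toy backbone)
# is an assumption, not the published architecture.
import torch
import torch.nn as nn


class DepthCompletionNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder: consumes the RGB image concatenated with the
        # sparse depth map (3 + 1 = 4 input channels).
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Shared decoder trunk: upsamples back to input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Task heads reuse the same decoded features, so uncertainty
        # estimation adds only a 1x1 convolution per head.
        self.depth_head = nn.Conv2d(16, 1, 1)
        self.conf_head = nn.Sequential(nn.Conv2d(16, 1, 1), nn.Sigmoid())

    def forward(self, rgb, sparse_depth):
        feats = self.decoder(self.encoder(torch.cat([rgb, sparse_depth], dim=1)))
        return self.depth_head(feats), self.conf_head(feats)


if __name__ == "__main__":
    net = DepthCompletionNet()
    rgb = torch.rand(1, 3, 64, 64)        # camera image
    sparse = torch.rand(1, 1, 64, 64)     # sparse depth (e.g. projected landmarks)
    depth, confidence = net(rgb, sparse)
    print(depth.shape, confidence.shape)  # both (1, 1, 64, 64)
```

Because both heads read the same decoded feature map, the confidence output costs little extra compute beyond depth prediction itself, which is consistent with the abstract's claim that feature sharing is what enables real-time operation on a small onboard GPU.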