This paper explores the feasibility of a framework for vision-based obstacle avoidance techniques applied to unmanned aerial vehicles, in which the decision-making policies are trained under the supervision of actual human flight data. The neural networks are trained on aggregated flight data from human experts, learning the implicit policy for visual obstacle avoidance by extracting the necessary features from the images. The images and flight data are collected in a simulated environment provided by Gazebo, and the Robot Operating System (ROS) provides the communication nodes for the framework. The framework is tested and validated in various environments with four types of neural networks: fully connected neural networks, two- and three-dimensional convolutional neural networks (CNNs), and recurrent neural networks (RNNs). Among these, the sequential networks (i.e., 3D-CNNs and RNNs) provide better performance due to their ability to explicitly account for the dynamic nature of the obstacle avoidance problem.
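For illustration only, the following is a minimal PyTorch sketch of the kind of behavior-cloning setup the abstract describes: a small 3D-CNN maps a short clip of camera frames to a single steering command and is fit to expert (human pilot) labels with a regression loss. The layer sizes, clip length, command dimensionality, and the synthetic stand-in data are assumptions made for the example, not the paper's actual implementation or its Gazebo/ROS data pipeline.

```python
# Illustrative sketch (not the authors' code): a small 3D-CNN policy trained by
# behavior cloning on image clips paired with expert flight commands.
# FRAMES, IMG_H, IMG_W and the random tensors below are placeholders.

import torch
import torch.nn as nn

FRAMES, IMG_H, IMG_W = 4, 64, 64   # assumed clip length and image size

class Conv3DPolicy(nn.Module):
    """Maps a stack of recent camera frames to a single steering command."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=(2, 5, 5), stride=(1, 2, 2)), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=(2, 3, 3), stride=(1, 2, 2)), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, x):            # x: (batch, 1, FRAMES, IMG_H, IMG_W)
        return self.head(self.features(x))

# Behavior-cloning loop on synthetic stand-in data; in the paper's setting the
# clips and labels would come from the simulated flights recorded via Gazebo/ROS.
model = Conv3DPolicy()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    clips = torch.randn(8, 1, FRAMES, IMG_H, IMG_W)   # placeholder camera clips
    expert_cmd = torch.randn(8, 1)                    # placeholder expert command labels
    loss = loss_fn(model(clips), expert_cmd)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Swapping the 3D convolutions for 2D convolutions on single frames, or feeding per-frame features into an RNN, would give the other network variants the abstract compares; the temporal (sequential) variants are the ones reported to perform better.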
               