Swift decision-making based on visual perception of the environment is crucial for the autonomous control of visual underwater vehicles (VUVs) during underwater missions. However, learning the perception and decision models separately can weaken the robustness of the overall control system, since state extraction and control decision making become mismatched and asynchronous. As a remedy, this paper introduces an end-to-end monocular autonomous reinforcement control (MARC) framework for the autonomous control of VUVs, which operates in two cascaded stages: 1) perception, where a geometric network (GeoNet) built on a convolutional encoder-decoder generates depth maps from input environmental video; and 2) decision, where a reinforcement control network (CtrlNet) takes the depth maps as input, integrates a convolutional neural network into a deep deterministic policy gradient network, and outputs action decisions that are refined by a reinforcement learning algorithm for obstacle-avoidance-based autonomous control. Numerical and experimental results demonstrate that the proposed MARC produces high-quality depth predictions and performs obstacle-avoiding navigation and autonomous control of VUVs with high accuracy and strong robustness.
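The abstract does not give implementation details, so the following is a minimal PyTorch-style sketch of the two cascaded stages under assumed layer sizes, an assumed 128x128 monocular input, and an assumed 3-dimensional continuous action space. Only the forward pass of a DDPG-style actor is shown; the critic, replay buffer, and training loop of the full reinforcement learning algorithm are omitted. Apart from the names GeoNet and CtrlNet taken from the abstract, all layer widths and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn


class GeoNet(nn.Module):
    """Convolutional encoder-decoder mapping an RGB frame to a dense depth map (assumed layout)."""
    def __init__(self):
        super().__init__()
        # Encoder: downsample the input frame into a compact feature volume.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to the input resolution as a single-channel depth map.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, frame):
        return self.decoder(self.encoder(frame))


class CtrlNet(nn.Module):
    """CNN feature extractor feeding a DDPG-style actor that outputs continuous actions."""
    def __init__(self, action_dim=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.actor = nn.LazyLinear(action_dim)  # lazily sized to the flattened features
        self.bound = nn.Tanh()                  # squash actions to [-1, 1]

    def forward(self, depth_map):
        return self.bound(self.actor(self.features(depth_map)))


if __name__ == "__main__":
    frame = torch.rand(1, 3, 128, 128)   # one monocular RGB frame
    depth = GeoNet()(frame)              # perception stage: depth prediction
    action = CtrlNet()(depth)            # decision stage: control command
    print(depth.shape, action.shape)     # (1, 1, 128, 128) and (1, 3)
```

The cascade mirrors the abstract's two-stage design: the perception network produces a depth map that serves as the state representation for the decision network, so both stages operate on the same synchronized input.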