
Integrated Localization and Tracking for AUV With Model Uncertainties via Scalable Sampling-Based Reinforcement Learning Approach


This article studies the joint localization and tracking problem for an autonomous underwater vehicle (AUV) under two constraints: asynchronous clocks in the cyber channels and model uncertainty in the physical channels. More specifically, we develop a reinforcement learning (RL)-based asynchronous localization algorithm to estimate the AUV's position without requiring its clock to be synchronized with real time. Based on the estimated position, a scalable sampling strategy, the multivariate probabilistic collocation method with orthogonal fractional factorial design (M-PCM-OFFD), is employed to evaluate the AUV's time-varying uncertain model parameters. An RL-based tracking controller is then designed to drive the AUV to the desired target point, and performance analyses of the integrated solution are presented. The advantages of the solution are threefold: 1) the RL-based localization algorithm avoids the local optima of traditional least-squares methods; 2) the M-PCM-OFFD-based sampling strategy handles model uncertainty while reducing computational cost; and 3) the integrated design of localization and tracking reduces communication energy consumption. Finally, simulations and experiments demonstrate that the proposed localization algorithm effectively eliminates the impact of clock asynchrony and, more importantly, that integrating M-PCM-OFFD into the RL-based tracking controller yields accurate optimization solutions at limited computational cost.
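To make the pipeline structure concrete, below is a minimal, hypothetical Python sketch of the three stages the abstract describes: position estimation from acoustic ranges, collocation-style sampling of an uncertain model parameter, and a tracking update averaged over those samples. All function names, dynamics, and gains are illustrative assumptions; the sketch uses plain gradient descent and Gauss-Hermite nodes as simple stand-ins for the paper's RL-based localization and M-PCM-OFFD sampling, not the authors' actual algorithms.

import numpy as np

def localize(anchors, ranges, iters=200, lr=0.05):
    """Toy range-based localization via gradient descent
    (a stand-in for the paper's RL-based asynchronous localization)."""
    p = anchors.mean(axis=0)                      # start at the anchor centroid
    for _ in range(iters):
        d = np.linalg.norm(anchors - p, axis=1)   # predicted ranges
        residual = d - ranges
        grad = (residual / np.maximum(d, 1e-9))[:, None] * (p - anchors)
        p = p - lr * grad.mean(axis=0)
    return p

def collocation_samples(mean, std, n=3):
    """Gauss-Hermite nodes/weights for a Gaussian uncertain parameter,
    the kind of low-cost weighted sampling that PCM-style methods build on."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n)
    return mean + std * nodes, weights / weights.sum()

def tracking_step(pos, vel, target, drags, weights, gain=1.5, dt=0.1):
    """One tracking update: average a simple PD-like control law over the
    sampled drag coefficients (a stand-in for the RL-based controller)."""
    acc = np.zeros(2)
    for c, w in zip(drags, weights):
        acc += w * (gain * (target - pos) - c * vel)  # expected acceleration
    vel = vel + dt * acc
    return pos + dt * vel, vel

# Toy scenario: four surface anchors, one AUV, one uncertain drag coefficient.
anchors = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [50.0, 50.0]])
true_pos = np.array([12.0, 30.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)   # noise-free ranges

pos = localize(anchors, ranges)                       # stage 1: localization
drags, w = collocation_samples(mean=0.8, std=0.1)     # stage 2: sampling
vel, target = np.zeros(2), np.array([40.0, 10.0])
for _ in range(300):                                  # stage 3: tracking loop
    pos, vel = tracking_step(pos, vel, target, drags, w)
print("final position:", pos)

The structural point the sketch illustrates is that the uncertain parameter is evaluated only at a few weighted collocation nodes, rather than by dense Monte Carlo sampling, which is where the computational savings highlighted in the abstract come from.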

Keywords: localization; reinforcement learning; auv; localization tracking; scalable sampling; model

Journal Title: IEEE Transactions on Systems, Man, and Cybernetics: Systems
Year Published: 2022
