
Improved Q-Learning Applied to Dynamic Obstacle Avoidance and Path Planning

Due to the complexity of interactive environments, dynamic obstacle avoidance path planning poses a significant challenge to agent mobility. Dynamic path planning is a complex multi-constraint combinatorial optimization problem, and some existing algorithms easily fall into local optima when solving it, which limits their convergence speed and accuracy. Reinforcement learning offers advantages for sequential decision problems in complex environments, and Q-learning is one such reinforcement learning method. To improve the algorithm's value evaluation on practical problems, this paper introduces a priority weight into the Q-learning algorithm. The improved algorithm is compared with existing algorithms and applied to dynamic obstacle avoidance path planning. Experiments show that it markedly improves convergence speed and accuracy and raises the value evaluation; in the reported test, the improved algorithm finds the shortest path of 16 units in 27 seconds.
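The abstract does not give the exact form of the priority weight, so the following is only a hypothetical sketch of the general idea: tabular Q-learning on a small grid world with obstacles, where an assumed priority weight scales each value update by the magnitude of the temporal-difference (TD) error, so surprising transitions update the table more strongly. The grid, reward values, and the weight formula `w = min(1, 0.5 + |td|/10)` are illustrative assumptions, not the paper's method.

```python
import random

def train(grid, start, goal, episodes=500, alpha=0.1, gamma=0.9, eps=0.2):
    """Tabular Q-learning on a grid; cells with 1 are obstacles, 0 are free."""
    rows, cols = len(grid), len(grid[0])
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
    Q = {}  # maps (state, action) -> estimated return

    def q(s, a):
        return Q.get((s, a), 0.0)

    def step(s, a):
        r, c = s[0] + a[0], s[1] + a[1]
        if not (0 <= r < rows and 0 <= c < cols) or grid[r][c] == 1:
            return s, -1.0        # blocked move: stay put, penalty
        if (r, c) == goal:
            return (r, c), 10.0   # reached the goal
        return (r, c), -0.1       # small step cost favors short paths

    for _ in range(episodes):
        s = start
        for _ in range(200):
            if random.random() < eps:                      # explore
                a = random.choice(actions)
            else:                                          # exploit
                a = max(actions, key=lambda a: q(s, a))
            s2, r = step(s, a)
            td = r + gamma * max(q(s2, b) for b in actions) - q(s, a)
            # Assumed priority weight: transitions with a larger TD error
            # receive a stronger update. Illustration only, not the paper's.
            w = min(1.0, 0.5 + abs(td) / 10.0)
            Q[(s, a)] = q(s, a) + alpha * w * td
            s = s2
            if s == goal:
                break
    return Q

def greedy_path(Q, grid, start, goal, limit=50):
    """Follow the learned greedy policy from start, up to `limit` steps."""
    rows, cols = len(grid), len(grid[0])
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    s, path = start, [start]
    for _ in range(limit):
        if s == goal:
            break
        a = max(actions, key=lambda a: Q.get((s, a), 0.0))
        r, c = s[0] + a[0], s[1] + a[1]
        if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
            s = (r, c)
        path.append(s)
    return path
```

The weighting here plays the same role as the learning rate but varies per transition, which is one plausible way a "priority weight" could accelerate convergence; the paper's actual definition may differ.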

Keywords: path planning; path; avoidance path; obstacle avoidance; dynamic obstacle

Journal Title: IEEE Access
Year Published: 2022


