Autonomous vehicles in a platoon determine their control inputs based on system state information collected and shared by Internet of Things (IoT) devices. Deep reinforcement learning (DRL) is regarded as a promising method for car-following control, but it has mostly been studied in settings with a single following vehicle. Learning an efficient car-following policy that converges stably is more challenging when a platoon contains multiple following vehicles, especially under unpredictable leading-vehicle behavior. In this context, we adopt an integrated DRL and dynamic programming (DP) approach to learn autonomous platoon control policies, embedding the deep deterministic policy gradient (DDPG) algorithm into a finite-horizon value iteration framework. Although the DP framework improves the stability and performance of DDPG, it suffers from lower sampling and training efficiency. In this article, we propose finite-horizon DDPG with sweeping through reduced state space using stationary approximation (FH-DDPG-SS), which overcomes these limitations through three key ideas: transferring network weights backward in time, approximating the policy as stationary for earlier time steps, and sweeping through a reduced state space. To verify the effectiveness of FH-DDPG-SS, we perform simulations using real driving data and compare the performance of FH-DDPG-SS with that of benchmark algorithms. Finally, we demonstrate the platoon safety and string stability achieved by FH-DDPG-SS.
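To make the training structure concrete, the sketch below illustrates the three key ideas named in the abstract: backward-in-time weight transfer across per-step policies, a stationary-policy approximation for earlier time steps, and sweeping over a reduced set of states. This is a minimal illustrative sketch in PyTorch, not the authors' implementation; the horizon, cutoff, network sizes, the `train_one_step` objective, and all identifiers are assumptions introduced here.

```python
# Hypothetical sketch of the FH-DDPG-SS training structure. All names,
# dimensions, and the toy objective are illustrative assumptions.
import copy
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 3, 1   # e.g., gap error, relative speed, acceleration
HORIZON = 10                   # assumed finite horizon T
STATIONARY_CUTOFF = 4          # steps before this index share one policy

def make_actor():
    # Small deterministic policy network mu_k(s) -> a, as used in DDPG.
    return nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(),
                         nn.Linear(32, ACTION_DIM), nn.Tanh())

def train_one_step(actor, samples):
    # Placeholder for the per-step DDPG update; a real implementation would
    # also train a critic Q_k against the next step's value as the target.
    opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
    for s in samples:
        a = actor(s)
        loss = (a ** 2).mean()  # dummy objective standing in for -Q_k(s, mu_k(s))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return actor

# Sweep through a *reduced* state space: a coarse sample of states rather
# than the full continuous space (third key idea).
reduced_states = [torch.randn(8, STATE_DIM) for _ in range(5)]

policies = {}
next_actor = make_actor()  # policy for the final time step, trained first
for k in reversed(range(HORIZON)):
    if k < STATIONARY_CUTOFF:
        # Stationary approximation: earlier time steps reuse one policy,
        # avoiding a separate DDPG run per step (second key idea).
        policies[k] = policies[STATIONARY_CUTOFF]
        continue
    # Transfer network weights backward in time: warm-start step k from the
    # trained step k+1 policy to stabilize training (first key idea).
    actor = copy.deepcopy(next_actor)
    actor = train_one_step(actor, reduced_states)
    policies[k] = actor
    next_actor = actor

print({k: type(v).__name__ for k, v in sorted(policies.items())})
```

Under these assumptions, the backward sweep plays the role of finite-horizon value iteration: each per-step policy is fitted against its successor, while the stationary cutoff and the reduced state sample address the sampling and training inefficiency the abstract attributes to the plain DP framework.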