Abstract This manuscript presents an Adam optimization-based Deep Reinforcement Learning model for Mixed Traffic Flow control (ADRL-MTF) to guide the longitudinal trajectory of a Connected and Autonomous Vehicle (CAV) on a typical urban roadway with signal-controlled intersections. Two improvements are made over prior literature. First, common simplifying assumptions, such as dividing a vehicle trajectory into segments of constant acceleration/deceleration, are avoided, improving modeling realism. Second, built on efficient Adam optimization and Deep Q-Learning, the proposed model avoids enumerating states and actions, making it computationally efficient and suitable for real-time applications. The mixed traffic flow dynamics are first formulated as a finite Markov decision process (MDP) model. Because time, space, and speed are discretized, this MDP model has a high-dimensional state space and is very challenging to solve. We then propose a temporal difference-based deep reinforcement learning approach with ε-greedy action selection to balance exploration and exploitation. Two neural networks are developed: one replaces the traditional Q function and the other generates the targets in the Q-learning update. These networks are trained with the Adam optimization algorithm, which extends stochastic gradient descent by incorporating the second moments of the gradients, and is thus highly computationally efficient with low memory requirements. The proposed model reduces fuel consumption by 7.8%, outperforming a prior benchmark model based on Monte Carlo Tree Search. The model's runtime efficiency and stability are tested, and a sensitivity analysis is also performed.
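The abstract combines three ingredients: a temporal-difference target generated by a frozen target network, ε-greedy exploration, and Adam updates that track first and second moments of the gradient. A minimal sketch of how these pieces fit together is given below. All dimensions, hyperparameters, and the linear Q-network are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Hypothetical sketch: linear Q-network Q(s) = s @ W, a frozen target
# network for the TD target, epsilon-greedy exploration, and a hand-rolled
# Adam update using first and second gradient moments.
rng = np.random.default_rng(0)
n_states, n_actions = 8, 3                 # assumed toy dimensions
W = rng.normal(scale=0.1, size=(n_states, n_actions))  # online Q-network
W_target = W.copy()                        # target network (updated periodically)
m, v = np.zeros_like(W), np.zeros_like(W)  # Adam first/second moment estimates
alpha, beta1, beta2, adam_eps = 1e-3, 0.9, 0.999, 1e-8
gamma, epsilon = 0.99, 0.1                 # discount, exploration rate
t = 0                                      # Adam time step

def q_values(s, weights):
    return s @ weights

def select_action(s):
    # epsilon-greedy: random action with prob. epsilon, else greedy
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(q_values(s, W)))

def adam_step(grad):
    # Adam: bias-corrected first/second moments of the gradient
    global m, v, t, W
    t += 1
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    W -= alpha * m_hat / (np.sqrt(v_hat) + adam_eps)

def td_update(s, a, r, s_next):
    # TD target comes from the frozen target network
    target = r + gamma * np.max(q_values(s_next, W_target))
    pred = q_values(s, W)[a]
    # gradient of 0.5 * (pred - target)^2 w.r.t. W for a linear network
    grad = np.zeros_like(W)
    grad[:, a] = (pred - target) * s
    adam_step(grad)

# one simulated transition
s = rng.normal(size=n_states)
a = select_action(s)
td_update(s, a, r=1.0, s_next=rng.normal(size=n_states))
```

In a full implementation the two networks would be deep (hence Deep Q-Learning), updates would be batched from a replay buffer, and `W_target` would be synchronized with `W` every fixed number of steps; the sketch isolates only the update rule the abstract describes.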