The emergence of autonomous vehicles on roads has increased the importance of research on autonomous driving in mixed-autonomy traffic. In mixed-autonomy traffic scenarios, it is necessary to understand the instability of autonomous vehicles and of traffic flow as a function of the uncertainty level in human driving behaviors. However, stability analyses of deep reinforcement learning remain limited. This study focuses on the impact of deep reinforcement learning-based autonomous vehicles in mixed-autonomy traffic from the stability perspective. We define policy instability and traffic flow instability using the entropy of velocity distributions, providing a quantitative measure of an autonomous vehicle's instability. Subsequently, we provide mathematical analyses to explain the logarithmic growth pattern of instability. Moreover, we propose a novel deep reinforcement learning approach that jointly determines discrete and continuous actions under partial observation. To verify the proposed solution, we perform extensive simulations of various traffic scenarios (e.g., increasing traffic volume, increasing the number of autonomous vehicles on the road, and varying the uncertainty level of human driving behaviors), together with ablation studies on the reward function. We also analyze instabilities when human-driven vehicles are modeled with a human-like noisy controller and with a policy trained via imitation learning on real human-driving data. The simulation results support the theoretical analysis and confirm that the proposed method is more stable than a conventional control-theoretic approach.
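As a rough illustration of the entropy-based instability measure mentioned in the abstract, the sketch below computes the Shannon entropy of an empirical velocity distribution. The function name, binning scheme, and velocity range are assumptions chosen for illustration, not the paper's exact definition.

```python
import numpy as np

def velocity_entropy(velocities, num_bins=50, v_min=0.0, v_max=30.0):
    """Shannon entropy of an empirical velocity distribution.

    Higher entropy indicates a more dispersed (and, in the abstract's
    framing, less stable) velocity profile over the observation window.
    NOTE: bin count and velocity range are illustrative assumptions.
    """
    hist, _ = np.histogram(velocities, bins=num_bins,
                           range=(v_min, v_max))
    p = hist / hist.sum()          # normalize counts to probabilities
    p = p[p > 0]                   # drop empty bins so log is defined
    return -np.sum(p * np.log(p))  # Shannon entropy in nats
```

In this framing, policy instability would apply the measure to a single autonomous vehicle's velocities over a time window, while traffic flow instability would apply it to the velocities of all vehicles in the flow; the abstract does not spell out the windowing, so treat that split as a plausible reading.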
               
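The abstract also describes a policy that jointly determines discrete and continuous actions under partial observation. Below is a minimal PyTorch sketch of one common way to realize such a hybrid action space: a categorical head and a Gaussian head sharing an encoder. The network shape, action semantics, and class name are assumptions, since the paper's architecture is not given here.

```python
import torch
import torch.nn as nn

class HybridPolicy(nn.Module):
    """Sketch of a policy emitting a discrete action (e.g., a lane-change
    choice) and a continuous action (e.g., acceleration) from a partial
    observation. Architecture details are illustrative assumptions."""

    def __init__(self, obs_dim, n_discrete=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        self.discrete_head = nn.Linear(128, n_discrete)  # action logits
        self.mu_head = nn.Linear(128, 1)                 # mean acceleration
        self.log_std = nn.Parameter(torch.zeros(1))      # learned std dev

    def forward(self, obs):
        h = self.encoder(obs)
        disc = torch.distributions.Categorical(logits=self.discrete_head(h))
        cont = torch.distributions.Normal(self.mu_head(h), self.log_std.exp())
        return disc, cont  # sample both to act; use log_probs for training
```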