Nowadays, multiagent reinforcement learning (MARL) has seen significant advances on adaptive traffic signal control (ATSC) problems. In most studies, all agents are isomorphic, which disregards situations in which heterogeneous intersections must cooperate in a real ATSC scenario, especially in epidemic regions where different intersections have quite different levels of importance. To this end, this paper models the ATSC problem as a networked Markov game (NMG), in which each agent takes into account both its own traffic conditions and those of its connected neighbors. A cooperative MARL framework named neighborhood cooperative hysteretic DQN (NC-HDQN) is proposed. Specifically, for each NC-HDQN agent in the NMG, first, the framework analyzes the agent's correlation degrees with its connected neighbors and weights observations and rewards by these correlations. Second, NC-HDQN agents independently optimize their strategies on the weighted information using hysteretic DQN (HDQN), which is designed to learn optimal joint strategies in cooperative multiagent games. Third, two variants are designed: a rule-based method, empirical NC-HDQN (ENC-HDQN), and a Pearson-correlation-coefficient-based method, Pearson NC-HDQN (PNC-HDQN). The former maps the correlation degree between two connected agents from the number of vehicles on the roads between them, whereas the latter computes the correlation degree adaptively using the Pearson correlation coefficient. Our methods are empirically evaluated in one synthetic and two real-world traffic scenarios and perform better on almost every standard ATSC metric.
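The two core ingredients of the framework, Pearson-based neighbor weighting and the hysteretic update rule, can be sketched roughly as follows. This is a minimal tabular illustration, not the paper's exact formulation: the function names, the hyperparameter values, and the use of |r| as the cooperation weight are illustrative assumptions.

```python
import numpy as np

def pearson_weight(own_history, neighbor_history):
    """Correlation degree between two connected agents (PNC-HDQN idea).

    Assumption: each history is a 1-D array of traffic observations
    (e.g., queue lengths over time); the magnitude of the Pearson
    correlation coefficient is used as the neighbor's weight.
    """
    r = np.corrcoef(own_history, neighbor_history)[0, 1]
    return abs(r)

def hysteretic_update(q, reward=0.0, q_next_max=0.0,
                      gamma=0.95, alpha=0.1, beta=0.01):
    """One hysteretic Q-value update (the HDQN principle, tabular form).

    Positive TD errors are applied with the larger rate alpha, negative
    ones with the smaller rate beta (beta < alpha), making the agent
    optimistic about teammates' exploratory actions.
    """
    delta = reward + gamma * q_next_max - q
    lr = alpha if delta >= 0 else beta
    return q + lr * delta
```

In NC-HDQN, a weight like `pearson_weight` would scale the neighbor's observations and rewards before they enter the agent's (deep) hysteretic learner; the asymmetric learning rates in `hysteretic_update` are what distinguish HDQN from a standard DQN update.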