Mean field reinforcement learning (MFRL) addresses the problem of dimensional explosion in large-scale multiagent systems. However, MFRL averages the actions of neighbors equally, discarding the diversity and distinct features of individuals, which may lead to poor performance in many application scenarios. In this article, a new MFRL algorithm termed temporal weighted mean field Q-learning (TWMFQ) is proposed. TWMFQ introduces a temporal compensated multihead attention structure to construct a weighted mean-field framework, which distills the complex relationships within the swarm into interactions between a specific agent and a weighted virtual mean agent. This allows the mean Q-function to represent the swarm behavior more informatively and comprehensively. In addition, an advanced sampling mechanism called mixed experience replay is established, which enriches the diversity of samples and prevents the algorithm from falling into locally optimal solutions. Comparison experiments on the MAgent and multi-USV platforms demonstrate the superior performance of TWMFQ across different population sizes.
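The abstract names two mechanisms, an attention-weighted mean action in place of the uniform neighbor average and a mixed experience replay sampler, but gives no formulas. The NumPy sketch below illustrates one plausible reading of each; the projection matrices `W_q` and `W_k`, the exponential smoothing used as a stand-in for temporal compensation, and the recent/uniform batch split in `mixed_sample` are all assumptions, not the paper's actual design.

```python
import random
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def weighted_mean_action(agent_feat, nbr_feats, nbr_actions,
                         W_q, W_k, prev_weights=None, n_heads=4, beta=0.8):
    """Attention-weighted mean of neighbor actions (illustrative sketch).

    Vanilla mean-field Q-learning uses the uniform average
    a_bar = nbr_actions.mean(axis=0); here each neighbor is instead
    weighted by multi-head attention scores between the focal agent's
    features and that neighbor's features. The exponential smoothing with
    prev_weights stands in for the paper's "temporal compensation"
    (an assumed form; the abstract does not give the actual equation).
    """
    n, h = len(nbr_feats), n_heads
    d = W_q.shape[1] // h
    q = (agent_feat @ W_q).reshape(h, d)                 # (H, d) queries
    k = (nbr_feats @ W_k).reshape(n, h, d)               # (N, H, d) keys
    scores = np.einsum('hd,nhd->hn', q, k) / np.sqrt(d)  # (H, N) logits
    weights = softmax(scores, axis=-1).mean(axis=0)      # fuse heads -> (N,)
    if prev_weights is not None:                         # temporal compensation
        weights = beta * weights + (1.0 - beta) * prev_weights
        weights = weights / weights.sum()
    return weights @ nbr_actions, weights                # weighted mean action

def mixed_sample(buffer, batch_size, recent_frac=0.25, recent_window=1000):
    """Mixed experience replay (assumed form): draw part of each batch
    uniformly from the whole buffer and part from the most recent
    transitions, so both stale and fresh experience are represented.
    Assumes len(buffer) >= batch_size.
    """
    n_recent = int(batch_size * recent_frac)
    recent = list(buffer)[-recent_window:]
    batch = random.sample(list(buffer), batch_size - n_recent)
    batch += random.sample(recent, min(n_recent, len(recent)))
    return batch
```

In this reading, the attention weights replace the uniform 1/N factor of vanilla MFQ, so neighbors judged more relevant to the focal agent contribute more to the virtual mean agent whose action enters the mean Q-function.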