
A Dynamically Adaptive Approach to Reducing Strategic Interference for Multiagent Systems



Multiagent reinforcement learning (RL) is widely used and successfully solves many real-world problems. In a multiagent RL system, a global critic network guides each agent’s policy updates so that the agent learns the strategy most beneficial to the collective. However, the global critic also couples each agent’s learning to the other agents’ strategies, which destabilizes learning. To address this problem, we propose dynamic decomposed multiagent deep deterministic policy gradient (DD-MADDPG): a new network that considers both global and local evaluations and adaptively adjusts each agent’s attention between the two. In addition, the experience replay buffer used by multiagent deep deterministic policy gradient (MADDPG) retains outdated experience, and the outdated strategies of other agents further disturb the current agent’s learning. To reduce this influence, we propose TD-error- and time-based experience sampling (T2-PER) on top of DD-MADDPG. We evaluate the proposed algorithm in terms of learning stability and the average return obtained by the agents, with experiments conducted in the multiagent particle environment (MPE). The results show that the proposed method achieves better stability and higher learning efficiency than MADDPG, and exhibits a degree of generalization ability.
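The two ideas in the abstract can be sketched in a few lines. This is an illustrative sketch only, not the paper's actual update rules: the blending weight `alpha`, the priority formula, and the `decay` constant are assumptions introduced here to show the shape of an adaptive global/local mix and a TD-error-plus-age sampling priority.

```python
def mixed_q_value(q_global: float, q_local: float, alpha: float) -> float:
    """Blend global and local critic estimates.

    alpha in [0, 1] is the agent's attention to the global evaluation;
    in DD-MADDPG this weight is adapted during training (the adaptation
    rule itself is not reproduced here).
    """
    return alpha * q_global + (1.0 - alpha) * q_local


def t2_per_priority(td_error: float, age: int,
                    eps: float = 1e-6, decay: float = 0.9) -> float:
    """Hypothetical T2-PER sampling priority.

    A larger absolute TD-error raises the priority, while a larger age
    (time since the transition was stored) discounts it, so transitions
    generated under other agents' outdated strategies are sampled less.
    """
    return (abs(td_error) + eps) * (decay ** age)
```

Under this sketch, a fresh high-error transition outranks both a low-error one and a stale copy of itself, which is the qualitative behavior the abstract describes.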

Keywords: multiagent systems; dynamically adaptive approach; reducing interference; experience replay

Journal Title: IEEE Transactions on Cognitive and Developmental Systems
Year Published: 2022



